Feb 16 11:06:44 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 16 11:06:44 crc restorecon[4676]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 11:06:44 crc restorecon[4676]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 
11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:44 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 11:06:45 crc restorecon[4676]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 11:06:45 crc kubenswrapper[4797]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 11:06:45 crc kubenswrapper[4797]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 11:06:45 crc kubenswrapper[4797]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 11:06:45 crc kubenswrapper[4797]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 11:06:45 crc kubenswrapper[4797]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 11:06:45 crc kubenswrapper[4797]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.732971 4797 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.737956 4797 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.737979 4797 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.737984 4797 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.737988 4797 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.737992 4797 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.737999 4797 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738004 4797 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738009 4797 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738012 4797 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738017 4797 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738034 4797 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738041 4797 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738046 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738051 4797 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738055 4797 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738060 4797 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738065 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738069 4797 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738074 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738078 4797 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738083 4797 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738088 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738096 4797 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738101 4797 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738109 4797 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738120 4797 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738126 4797 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738133 4797 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738138 4797 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738142 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738146 4797 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738151 4797 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738156 4797 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738161 4797 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738170 4797 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738175 4797 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738180 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738185 4797 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738196 4797 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738202 4797 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738208 4797 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738214 4797 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738220 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738227 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738233 4797 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738238 4797 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738248 4797 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738253 4797 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738258 4797 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738291 4797 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738295 4797 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738300 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738305 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738310 4797 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738314 4797 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738319 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738323 4797 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738330 4797 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738339 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738343 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738351 4797 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738358 4797 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738362 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738370 4797 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738375 4797 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738379 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738384 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738388 4797 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738393 4797 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738400 4797 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.738408 4797 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740086 4797 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740367 4797 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740407 4797 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740425 4797 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740441 4797 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740452 4797 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740468 4797 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740487 4797 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740510 4797 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740522 4797 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740534 4797 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740545 4797 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740557 4797 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740569 4797 flags.go:64] FLAG: --cgroup-root=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740618 4797 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740632 4797 flags.go:64] FLAG: --client-ca-file=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740643 4797 flags.go:64] FLAG: --cloud-config=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740655 4797 flags.go:64] FLAG: --cloud-provider=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740666 4797 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740684 4797 flags.go:64] FLAG: --cluster-domain=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740695 4797 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740706 4797 flags.go:64] FLAG: --config-dir=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740716 4797 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740726 4797 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740739 4797 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740748 4797 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740758 4797 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740769 4797 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740779 4797 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740788 4797 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740797 4797 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740806 4797 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740815 4797 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740826 4797 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740835 4797 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740844 4797 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740852 4797 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740861 4797 flags.go:64] FLAG: --enable-server="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740870 4797 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740883 4797 flags.go:64] FLAG: --event-burst="100"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740892 4797 flags.go:64] FLAG: --event-qps="50"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740900 4797 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740909 4797 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740917 4797 flags.go:64] FLAG: --eviction-hard=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740928 4797 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740937 4797 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740946 4797 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740955 4797 flags.go:64] FLAG: --eviction-soft=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740964 4797 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740973 4797 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740983 4797 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.740991 4797 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741000 4797 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741008 4797 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741017 4797 flags.go:64] FLAG: --feature-gates=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741043 4797 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741052 4797 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741062 4797 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741071 4797 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741082 4797 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741091 4797 flags.go:64] FLAG: --help="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741100 4797 flags.go:64] FLAG: --hostname-override=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741109 4797 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741118 4797 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741127 4797 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741136 4797 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741145 4797 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741153 4797 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741162 4797 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741171 4797 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741179 4797 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741189 4797 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741199 4797 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741208 4797 flags.go:64] FLAG: --kube-reserved=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741218 4797 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741227 4797 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741237 4797 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741246 4797 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741256 4797 flags.go:64] FLAG: --lock-file=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741264 4797 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741273 4797 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741283 4797 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741297 4797 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741306 4797 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741315 4797 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741324 4797 flags.go:64] FLAG: --logging-format="text"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741333 4797 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741342 4797 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741351 4797 flags.go:64] FLAG: --manifest-url=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741360 4797 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741374 4797 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741383 4797 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741394 4797 flags.go:64] FLAG: --max-pods="110"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741403 4797 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741412 4797 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741421 4797 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741430 4797 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741439 4797 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741448 4797 flags.go:64] FLAG: --node-ip="192.168.126.11"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741457 4797 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741481 4797 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741490 4797 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741499 4797 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741508 4797 flags.go:64] FLAG: --pod-cidr=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741517 4797 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741529 4797 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741537 4797 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741547 4797 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741556 4797 flags.go:64] FLAG: --port="10250"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741565 4797 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741574 4797 flags.go:64] FLAG: --provider-id=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741610 4797 flags.go:64] FLAG: --qos-reserved=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741621 4797 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741638 4797 flags.go:64] FLAG: --register-node="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741660 4797 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741673 4797 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741693 4797 flags.go:64] FLAG: --registry-burst="10"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741703 4797 flags.go:64] FLAG: --registry-qps="5"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741712 4797 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741724 4797 flags.go:64] FLAG: --reserved-memory=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741736 4797 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741744 4797 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741754 4797 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741763 4797 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741772 4797 flags.go:64] FLAG: --runonce="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741780 4797 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741790 4797 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741799 4797 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741808 4797 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741817 4797 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741826 4797 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741835 4797 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741844 4797 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741854 4797 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741863 4797 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741871 4797 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741880 4797 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741889 4797 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741898 4797 flags.go:64] FLAG: --system-cgroups=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741907 4797 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741924 4797 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741933 4797 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741941 4797 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741954 4797 flags.go:64] FLAG: --tls-min-version=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741963 4797 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741971 4797 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741981 4797 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.741990 4797 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.742000 4797 flags.go:64] FLAG: --v="2"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.742012 4797 flags.go:64] FLAG: --version="false"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.742023 4797 flags.go:64] FLAG: --vmodule=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.742033 4797 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.742043 4797 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742317 4797 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742329 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742342 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742350 4797 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742359 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742367 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742375 4797 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742383 4797 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742391 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742399 4797 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742406 4797 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742414 4797 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742422 4797 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742430 4797 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742438 4797 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742446 4797 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742454 4797 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742461 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742469 4797 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742476 4797 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742484 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742495 4797 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742505 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742514 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742525 4797 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742535 4797 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742544 4797 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742552 4797 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742561 4797 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742569 4797 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742604 4797 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742614 4797 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742625 4797 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742635 4797 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742645 4797 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742659 4797 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742670 4797 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742680 4797 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742690 4797 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742698 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742706 4797 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742714 4797 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742722 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742730 4797 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742738 4797 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742745 4797 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742753 4797 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742761 4797 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742769 4797 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742777 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742785 4797 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742793 4797 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742801 4797 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742810 4797 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742818 4797 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742825 4797 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742832 4797 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742841 4797 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742849 4797 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742856 4797 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742864 4797 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742871 4797 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742879 4797 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742887 4797 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742894 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742902 4797 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742910 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742918 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742926 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742936 4797 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.742945 4797 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.742957 4797 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.754202 4797 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.754251 4797 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754436 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754459 4797 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754470 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754480 4797 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754490 4797 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754499 4797 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754508 4797 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754517 4797 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754526 4797 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754534 4797 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754542 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754551 4797 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754560 4797 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754569 4797 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754613 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754626 4797 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754640 4797 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754653 4797 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754662 4797 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754671 4797 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754680 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754688 4797 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754697 4797 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754706 4797 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754714 4797 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754723 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754731 4797 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754740 4797 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754748 4797 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754757 4797 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754766 4797 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754774 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754782 4797 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754791 4797 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754814 4797 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754823 4797 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754832 4797 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754840 4797 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754849 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754857 4797 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754865 4797 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754873 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754882 4797 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754891 4797 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754900 4797 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754910 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754919 4797 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754928 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754940 4797 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754952 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754962 4797 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754971 4797 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754981 4797 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.754991 4797 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755000 4797 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755012 4797 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755023 4797 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755032 4797 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755043 4797 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755053 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755064 4797 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755073 4797 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755082 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755091 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755099 4797 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755112 4797 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755123 4797 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755133 4797 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755142 4797 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755151 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755173 4797 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.755186 4797 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755428 4797 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755440 4797 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755449 4797 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755458 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755467 4797 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755475 4797 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755487 4797 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755497 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755508 4797 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755517 4797 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755525 4797 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755535 4797 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755544 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755554 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755563 4797 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755571 4797 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755613 4797 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755624 4797 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755639 4797 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755649 4797 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755658 4797 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755667 4797 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755676 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755685 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755694 4797 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755702 4797 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755711 4797 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755719 4797 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755728 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755736 4797 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755745 4797 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755753 4797 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755761 4797 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755770 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755799 4797 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755808 4797 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755817 4797 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755825 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755834 4797 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755843 4797 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755852 4797 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755860 4797 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755868 4797 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755876 4797 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755885 4797 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755894 4797 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755902 4797 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755910 4797 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755921 4797 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755932 4797 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755942 4797 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755951 4797 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755960 4797 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755968 4797 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755977 4797 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755985 4797 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.755998 4797 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756009 4797 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756019 4797 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756030 4797 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756040 4797 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756049 4797 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756058 4797 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756068 4797 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756077 4797 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756085 4797 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756094 4797 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756102 4797 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756111 4797 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756120 4797 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.756464 4797 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.756480 4797 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.757674 4797 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.764512 4797 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.764634 4797 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.765990 4797 server.go:997] "Starting client certificate rotation"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.766018 4797 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.766208 4797 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-09 00:08:10.091627269 +0000 UTC
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.766337 4797 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.791907 4797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.792505 4797 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.795372 4797 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.810642 4797 log.go:25] "Validated CRI v1 runtime API"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.847809 4797 log.go:25] "Validated CRI v1 image API"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.850510 4797 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.855252 4797 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-11-02-15-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.855321 4797 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.892111 4797 manager.go:217] Machine: {Timestamp:2026-02-16 11:06:45.888699426 +0000 UTC m=+0.608884506 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:599a276a-da76-4549-96c4-dbb5c7e37426 BootID:fbba5025-2e12-492d-9c5c-fa0555b0b84a Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:be:e7:79 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:be:e7:79 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:3e:d4:dd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:81:6d:d0 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:88:32:45 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6e:f4:70 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:2a:d5:c5:76:28:79 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:02:e2:57:3b:66:11 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.892533 4797 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.892769 4797 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.893216 4797 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.893626 4797 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.893693 4797 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.894705 4797 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.894741 4797 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.895669 4797 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.895712 4797 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.896002 4797 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.896480 4797 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.901510 4797 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.901550 4797 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.901670 4797 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.901693 4797 kubelet.go:324] "Adding apiserver pod source"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.901715 4797 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.906852 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.906988 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.906860 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.907060 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.909196 4797 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.911461 4797 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
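The container_manager_linux.go:272 record above dumps the kubelet's effective resource-management settings as one JSON blob, which is hard to read inline. A small Python sketch that prints the eviction-related fields legibly; the literal below copies only those fields, verbatim, out of the nodeConfig payload logged above:

import json

# Abbreviated copy of the nodeConfig JSON from the record above;
# only the fields used here are kept, values verbatim from the log.
node_config = json.loads("""
{
  "SystemReserved": {"cpu": "200m", "ephemeral-storage": "350Mi", "memory": "350Mi"},
  "HardEvictionThresholds": [
    {"Signal": "memory.available",  "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available",  "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.1}},
    {"Signal": "nodefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}},
    {"Signal": "imagefs.available", "Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.15}},
    {"Signal": "imagefs.inodesFree","Operator": "LessThan", "Value": {"Quantity": null, "Percentage": 0.05}}
  ]
}
""")

print("system reserved:", node_config["SystemReserved"])
for t in node_config["HardEvictionThresholds"]:
    v = t["Value"]
    # A threshold is either an absolute quantity or a percentage of capacity.
    limit = v["Quantity"] if v["Quantity"] is not None else f"{v['Percentage']:.0%}"
    print(f"evict when {t['Signal']} {t['Operator']} {limit}")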
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.913680 4797 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915215 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915364 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915456 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915559 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915703 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915792 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915892 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.915989 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.916084 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.916184 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.916311 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.916404 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.917520 4797 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.918176 4797 server.go:1280] "Started kubelet"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.918233 4797 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.919844 4797 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.919844 4797 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 11:06:45 crc systemd[1]: Started Kubernetes Kubelet.
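Every failure so far (the CSR post, the Service and Node reflectors, the CSINode probe) and the lease, node-status, and event errors that follow all dial the same address, which is consistent with a single root cause: the kube-apiserver behind api-int.crc.testing:6443 is simply not accepting connections yet while the node boots, rather than many independent faults. A Python sketch, again over an assumed kubelet.log capture of this journal output, that confirms this by tallying the dial targets behind each "connection refused":

import re
from collections import Counter

DIAL = re.compile(r"dial tcp ([\d.]+:\d+): connect: connection refused")

targets = Counter()
with open("kubelet.log") as f:  # hypothetical capture of this journal output
    for line in f:
        targets.update(DIAL.findall(line))

# For the excerpt above this should print a single target,
# 38.102.83.192:6443, i.e. one outage, not many.
for target, n in targets.most_common():
    print(f"{n:3d}  {target}")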
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.920841 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.920922 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:53:39.16485534 +0000 UTC
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.921161 4797 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.922268 4797 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.927040 4797 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.927070 4797 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.927265 4797 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.928117 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="200ms"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.928515 4797 factory.go:55] Registering systemd factory
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.928633 4797 factory.go:221] Registration of the systemd container factory successfully
Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.922719 4797 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.928588 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused
Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.928884 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.929504 4797 factory.go:153] Registering CRI-O factory
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.929519 4797 factory.go:221] Registration of the crio container factory successfully
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.929616 4797 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.929644 4797 factory.go:103] Registering Raw factory
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.929662 4797 manager.go:1196] Started watching for new ooms in manager
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.930355 4797 manager.go:319] Starting recovery of all containers
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.931002 4797 server.go:460] "Adding debug handlers to kubelet server"
Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.933540 4797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894b566a4b650d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 11:06:45.918150866 +0000 UTC m=+0.638335856,LastTimestamp:2026-02-16 11:06:45.918150866 +0000 UTC m=+0.638335856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946333 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946419 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946437 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946451 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946469 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946522 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946539 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.946906 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947150 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947174 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947196 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947211 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947226 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947243 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947255 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947270 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947288 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947305 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947367 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947380 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947394 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947409 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.947424 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949345 4797 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949384 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949397 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949413 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949432 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949447 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949461 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949473 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949486 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.949499 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950139 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950203 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950234 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950257 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950278 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950302 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950362 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950383 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950406 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950427 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950449 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950471 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950494 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950516 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950539 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950564 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950616 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950641 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950663 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950684 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950717 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950742 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950766 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950791 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950814 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950832 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950849 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950865 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950881 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950896 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950914 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950932 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950948 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950963 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950979 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.950994 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951009 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951023 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951036 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951050 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951065 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951079 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951094 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951109 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951124 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951141 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951156 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951172 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951186 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951201 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951217 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951232 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951248 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951269 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951286 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951304 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951322 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951339 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951358 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951379 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951399 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951419 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951437 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951454 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951470 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951486 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951503 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951517 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951532 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951546 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951560 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951600 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951633 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951653 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951671 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951688 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951705 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951721 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951736 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951753 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951770 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951786 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951803 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951818 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951832 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951849 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951864 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951880 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951895 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951913 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951933 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951953 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951972 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.951993 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952012 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952034 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952055 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952076 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952096 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952114 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952133 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952153 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952174 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952193 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952212 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952231 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952250 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb"
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952269 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952289 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952309 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952335 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952357 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952375 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952394 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952417 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952438 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952458 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952476 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952495 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952514 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952533 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952554 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952572 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952622 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952643 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952664 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952682 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952702 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952724 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952784 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952802 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952823 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952841 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952860 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952878 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952896 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952913 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952933 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952951 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952970 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.952988 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953006 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953030 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953052 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953071 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953098 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953117 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953137 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953155 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953174 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953190 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953206 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953228 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953248 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953266 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953286 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953308 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953329 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953354 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953376 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953396 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953416 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953435 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953454 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953472 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953491 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953510 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953528 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953545 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953565 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953609 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953629 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953648 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953666 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953686 4797 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953706 4797 reconstruct.go:97] "Volume reconstruction finished" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.953720 4797 reconciler.go:26] "Reconciler: start to sync state" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.955759 4797 manager.go:324] Recovery completed Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.969195 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.970626 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.970666 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.970691 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.971218 4797 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.971234 4797 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.971250 4797 state_mem.go:36] "Initialized new in-memory state store" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.979618 4797 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.981383 4797 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.981442 4797 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.981479 4797 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.981709 4797 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.984969 4797 policy_none.go:49] "None policy: Start" Feb 16 11:06:45 crc kubenswrapper[4797]: W0216 11:06:45.986767 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:45 crc kubenswrapper[4797]: E0216 11:06:45.986862 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.987824 4797 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 11:06:45 crc kubenswrapper[4797]: I0216 11:06:45.987885 4797 state_mem.go:35] "Initializing new in-memory state store" Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.029253 4797 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.044320 4797 manager.go:334] "Starting Device Plugin manager" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.044381 4797 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.044396 4797 server.go:79] "Starting device plugin registration server" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.045058 4797 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.045078 4797 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.045323 4797 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.045463 4797 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.045477 4797 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.058833 4797 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.082072 4797 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 11:06:46 crc kubenswrapper[4797]: 
I0216 11:06:46.082224 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.083456 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.083486 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.083494 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.083606 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.083883 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.083955 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.084981 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085049 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085037 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085306 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085485 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.085549 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.086653 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.086683 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.086694 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.087526 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.087562 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.087593 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.087693 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.087838 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.087910 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088340 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088380 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088393 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088563 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088671 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088702 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088838 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088902 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.088914 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.089747 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.089778 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.089792 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.089965 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.090002 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.090039 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.090058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.090068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.090599 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.090641 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.090665 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.128830 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.145319 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.146344 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.146410 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.146424 4797 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.146453 4797 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.147092 4797 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.158204 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.158682 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.158816 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.158937 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159045 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159142 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159252 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159366 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159473 4797 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159619 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159773 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159880 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.159979 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.160087 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.160189 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261426 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261516 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 
11:06:46.261617 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261658 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261693 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261731 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261767 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261802 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261898 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261932 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261964 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.261997 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262029 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262060 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262091 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262211 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262213 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262288 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262296 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262307 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262388 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262424 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262429 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262473 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262440 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262463 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262493 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262410 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262554 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.262763 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.348225 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.350247 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.350305 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.350322 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.350354 4797 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.351064 4797 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.408320 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.415377 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.428808 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.445497 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.450703 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:46 crc kubenswrapper[4797]: W0216 11:06:46.471464 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8ce61fb3759ce759d763f9afa4ad69a84fe44a4db0ebe6e5a566183b0e29492d WatchSource:0}: Error finding container 8ce61fb3759ce759d763f9afa4ad69a84fe44a4db0ebe6e5a566183b0e29492d: Status 404 returned error can't find the container with id 8ce61fb3759ce759d763f9afa4ad69a84fe44a4db0ebe6e5a566183b0e29492d Feb 16 11:06:46 crc kubenswrapper[4797]: W0216 11:06:46.475434 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-1f003c2718b6a128e8a18183f940f246148dae4ab8e3343dbc012a48a06351ff WatchSource:0}: Error finding container 1f003c2718b6a128e8a18183f940f246148dae4ab8e3343dbc012a48a06351ff: Status 404 returned error can't find the container with id 1f003c2718b6a128e8a18183f940f246148dae4ab8e3343dbc012a48a06351ff Feb 16 11:06:46 crc kubenswrapper[4797]: W0216 11:06:46.477535 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-7f957267d31d7b1cda9094015eb7e97c8edbf009f31ae418ddd022f6888d3694 WatchSource:0}: Error finding container 7f957267d31d7b1cda9094015eb7e97c8edbf009f31ae418ddd022f6888d3694: Status 404 returned error can't find the container with id 7f957267d31d7b1cda9094015eb7e97c8edbf009f31ae418ddd022f6888d3694 Feb 16 11:06:46 crc kubenswrapper[4797]: W0216 11:06:46.481380 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-73a64448ac5633b3778d235857702ee476de60016adcb8a8eaad2678d831a620 WatchSource:0}: Error finding container 73a64448ac5633b3778d235857702ee476de60016adcb8a8eaad2678d831a620: Status 404 returned error can't find the container with id 
73a64448ac5633b3778d235857702ee476de60016adcb8a8eaad2678d831a620 Feb 16 11:06:46 crc kubenswrapper[4797]: W0216 11:06:46.492232 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5f45bf4751ff6dd0468631f796eb0004f32b73756c00943c24f5d778ff414d77 WatchSource:0}: Error finding container 5f45bf4751ff6dd0468631f796eb0004f32b73756c00943c24f5d778ff414d77: Status 404 returned error can't find the container with id 5f45bf4751ff6dd0468631f796eb0004f32b73756c00943c24f5d778ff414d77 Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.529887 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms" Feb 16 11:06:46 crc kubenswrapper[4797]: W0216 11:06:46.711346 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.711463 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.752133 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.753248 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.753289 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.753299 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.753322 4797 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 11:06:46 crc kubenswrapper[4797]: E0216 11:06:46.753896 4797 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.919904 4797 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.922011 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:03:44.46002391 +0000 UTC Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.985282 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ce61fb3759ce759d763f9afa4ad69a84fe44a4db0ebe6e5a566183b0e29492d"} Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.986267 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f45bf4751ff6dd0468631f796eb0004f32b73756c00943c24f5d778ff414d77"} Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.987147 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"73a64448ac5633b3778d235857702ee476de60016adcb8a8eaad2678d831a620"} Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.988163 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7f957267d31d7b1cda9094015eb7e97c8edbf009f31ae418ddd022f6888d3694"} Feb 16 11:06:46 crc kubenswrapper[4797]: I0216 11:06:46.989309 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1f003c2718b6a128e8a18183f940f246148dae4ab8e3343dbc012a48a06351ff"} Feb 16 11:06:47 crc kubenswrapper[4797]: W0216 11:06:47.016179 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:47 crc kubenswrapper[4797]: E0216 11:06:47.016271 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:47 crc kubenswrapper[4797]: W0216 11:06:47.239763 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:47 crc kubenswrapper[4797]: E0216 11:06:47.240220 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:47 crc kubenswrapper[4797]: W0216 11:06:47.293647 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:47 crc kubenswrapper[4797]: E0216 11:06:47.293708 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: 
connection refused" logger="UnhandledError" Feb 16 11:06:47 crc kubenswrapper[4797]: E0216 11:06:47.330992 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.554719 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.556381 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.556420 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.556430 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.556488 4797 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 11:06:47 crc kubenswrapper[4797]: E0216 11:06:47.557156 4797 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.812487 4797 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 11:06:47 crc kubenswrapper[4797]: E0216 11:06:47.813699 4797 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.919703 4797 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.922618 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:12:56.776603354 +0000 UTC Feb 16 11:06:47 crc kubenswrapper[4797]: E0216 11:06:47.976874 4797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894b566a4b650d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 11:06:45.918150866 +0000 UTC m=+0.638335856,LastTimestamp:2026-02-16 11:06:45.918150866 +0000 UTC m=+0.638335856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.996755 
4797 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9" exitCode=0 Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.996875 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9"} Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.996933 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.998132 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.998187 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:47 crc kubenswrapper[4797]: I0216 11:06:47.998206 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.001027 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0"} Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.001086 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347"} Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.001118 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953"} Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.001087 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.001137 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4"} Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.002334 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.002400 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.002415 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.004727 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f" exitCode=0 Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.004834 4797 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f"} Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.004993 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.006198 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.006236 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.006248 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.008620 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.008637 4797 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bfb8bf618f1da60c2e2200452e17bfbcaec2ee0a502c5e468dbe2a8216eaa0ec" exitCode=0 Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.008677 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bfb8bf618f1da60c2e2200452e17bfbcaec2ee0a502c5e468dbe2a8216eaa0ec"} Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.008790 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.009834 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.009869 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.009882 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.010567 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.010651 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.010671 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.011975 4797 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3" exitCode=0 Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.012031 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3"} Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.012139 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.013424 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.013474 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.013491 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:48 crc kubenswrapper[4797]: W0216 11:06:48.726084 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:48 crc kubenswrapper[4797]: E0216 11:06:48.726172 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.920689 4797 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:48 crc kubenswrapper[4797]: I0216 11:06:48.923562 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:45:25.305626785 +0000 UTC Feb 16 11:06:48 crc kubenswrapper[4797]: E0216 11:06:48.931869 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="3.2s" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.017001 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e03ac68651e6f65e2295acfcc538003af7c162a7fb76761c3e28d3b15e1c0c13"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.017079 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.018277 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.018320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.018333 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.020322 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 
11:06:49.020395 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.020411 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.020435 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.021559 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.021606 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.021617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.023898 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.023953 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.023970 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.023989 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.025598 4797 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1fe949bb7dd3abb04b1312984b3ba50a2ac5456997e75286042b76a674f33898" exitCode=0 Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.025676 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1fe949bb7dd3abb04b1312984b3ba50a2ac5456997e75286042b76a674f33898"} Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.025742 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.025762 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.026846 4797 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.026884 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.026897 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.027430 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.027457 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.027469 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.157962 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.159312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.159520 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.159543 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.159600 4797 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 11:06:49 crc kubenswrapper[4797]: E0216 11:06:49.160108 4797 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.192:6443: connect: connection refused" node="crc" Feb 16 11:06:49 crc kubenswrapper[4797]: W0216 11:06:49.564286 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:49 crc kubenswrapper[4797]: E0216 11:06:49.564367 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:49 crc kubenswrapper[4797]: W0216 11:06:49.795757 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:49 crc kubenswrapper[4797]: E0216 11:06:49.795866 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.920097 4797 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:49 crc kubenswrapper[4797]: I0216 11:06:49.924197 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:01:19.710838115 +0000 UTC Feb 16 11:06:50 crc kubenswrapper[4797]: W0216 11:06:50.019208 4797 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.192:6443: connect: connection refused Feb 16 11:06:50 crc kubenswrapper[4797]: E0216 11:06:50.019351 4797 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.192:6443: connect: connection refused" logger="UnhandledError" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.032748 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b35f505e0d6cd8526da203d9acab34b661d4396cbb286ac40e12fc12434303db"} Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.032918 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.033922 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.033956 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.033970 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.036537 4797 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1ac5a93c9d4dac107e9798f3ea98b14180ce9ad38fa1048850568c176ab08832" exitCode=0 Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.036677 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.037450 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.046306 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1ac5a93c9d4dac107e9798f3ea98b14180ce9ad38fa1048850568c176ab08832"} Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.046380 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.046388 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047125 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047173 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047169 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047806 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047871 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047927 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047954 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.047969 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:50 crc kubenswrapper[4797]: I0216 11:06:50.924382 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:11:11.322116773 +0000 UTC Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.040858 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.042817 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b35f505e0d6cd8526da203d9acab34b661d4396cbb286ac40e12fc12434303db" exitCode=255 Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.042940 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.042944 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b35f505e0d6cd8526da203d9acab34b661d4396cbb286ac40e12fc12434303db"} Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.043914 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.043955 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.043970 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.044671 4797 scope.go:117] "RemoveContainer" containerID="b35f505e0d6cd8526da203d9acab34b661d4396cbb286ac40e12fc12434303db" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.046499 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8b3a42a006bd7e94f2d8bf0eb3497c6855085a7b46bc9b6160e2374b622093f9"} Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.046538 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5515231a0c89ca3dc95a5a7378dd7d8423a64cb385913c2896fff07d732f5577"} Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.046554 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"037be71a565fee6fccf499bb13d62caa8649d7e7b509f68b998dd7180c85d6b6"} Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.046615 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.047666 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.047719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.047738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.451331 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:51 crc kubenswrapper[4797]: I0216 11:06:51.925521 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:53:16.608061438 +0000 UTC Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.053972 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.056922 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1"} Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.057042 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.057114 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.058619 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.058652 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.058664 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.065181 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0cf8efc14db2b408cd36560a7acc7da0745dd59512eb8a4844d76a406658106e"} Feb 16 11:06:52 crc 
kubenswrapper[4797]: I0216 11:06:52.065243 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"afbc0d06905291749751153453b35b030114f2ace32e976e9df9b2146bb62fe9"} Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.065413 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.066722 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.066783 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.066800 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.144562 4797 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.360560 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.362538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.362574 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.362603 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.362624 4797 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 11:06:52 crc kubenswrapper[4797]: I0216 11:06:52.926137 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 04:57:59.368288873 +0000 UTC Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.038016 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.067974 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.067991 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.068112 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.069014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.069071 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.069088 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.070229 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:53 crc 
kubenswrapper[4797]: I0216 11:06:53.070281 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.070299 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.555703 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.555981 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.557667 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.557729 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.557749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:53 crc kubenswrapper[4797]: I0216 11:06:53.926435 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 09:45:26.494541388 +0000 UTC Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.069771 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.069831 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.070732 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.070790 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.070811 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.216902 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.217184 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.218473 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.218551 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.218635 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.456285 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.456509 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 
11:06:54.458077 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.458140 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.458157 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.712445 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:06:54 crc kubenswrapper[4797]: I0216 11:06:54.927202 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:58:10.003582218 +0000 UTC Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.071846 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.073180 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.073217 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.073226 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.369679 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.369934 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.371397 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.371463 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.371488 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.831974 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.832148 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.833489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.833542 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.833560 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:55 crc kubenswrapper[4797]: I0216 11:06:55.839090 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:55 crc kubenswrapper[4797]: 
I0216 11:06:55.928129 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:16:31.766343432 +0000 UTC Feb 16 11:06:56 crc kubenswrapper[4797]: E0216 11:06:56.059119 4797 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 11:06:56 crc kubenswrapper[4797]: I0216 11:06:56.074930 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:56 crc kubenswrapper[4797]: I0216 11:06:56.076044 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:56 crc kubenswrapper[4797]: I0216 11:06:56.076094 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:56 crc kubenswrapper[4797]: I0216 11:06:56.076106 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:56 crc kubenswrapper[4797]: I0216 11:06:56.080615 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:56 crc kubenswrapper[4797]: I0216 11:06:56.929105 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 21:16:04.698524569 +0000 UTC Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.078812 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.079927 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.079966 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.080004 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.085076 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.217607 4797 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.217695 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 11:06:57 crc kubenswrapper[4797]: I0216 11:06:57.929498 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:22:58.508974769 +0000 UTC Feb 16 11:06:58 crc kubenswrapper[4797]: I0216 11:06:58.081535 4797 
Feb 16 11:06:58 crc kubenswrapper[4797]: I0216 11:06:58.081535 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 11:06:58 crc kubenswrapper[4797]: I0216 11:06:58.082855 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:06:58 crc kubenswrapper[4797]: I0216 11:06:58.082935 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:06:58 crc kubenswrapper[4797]: I0216 11:06:58.082953 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:06:58 crc kubenswrapper[4797]: I0216 11:06:58.930083 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:35:00.91901194 +0000 UTC
Feb 16 11:06:59 crc kubenswrapper[4797]: I0216 11:06:59.930447 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 17:58:35.475667251 +0000 UTC
Feb 16 11:07:00 crc kubenswrapper[4797]: I0216 11:07:00.920903 4797 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 16 11:07:00 crc kubenswrapper[4797]: I0216 11:07:00.931208 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:45:53.012748866 +0000 UTC
Feb 16 11:07:01 crc kubenswrapper[4797]: I0216 11:07:01.792400 4797 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 11:07:01 crc kubenswrapper[4797]: I0216 11:07:01.792479 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 11:07:01 crc kubenswrapper[4797]: I0216 11:07:01.836237 4797 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 11:07:01 crc kubenswrapper[4797]: I0216 11:07:01.836622 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 11:07:01 crc kubenswrapper[4797]: I0216 11:07:01.932969 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:23:52.196425709 +0000 UTC
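The two 403s above are a different failure mode from the timeout at 11:06:57: the kube-apiserver is now terminating TLS and answering requests, but the unauthenticated probe is denied on /livez, most likely because the RBAC bootstrap roles that normally grant anonymous access to the health endpoints have not been reconciled yet (see the [-]poststarthook/rbac/bootstrap-roles line just below). The denial body is a typed v1 Status object, as shown inline in the log. A sketch that fetches and decodes such a response (endpoint reuse and insecure TLS are assumptions for illustration, not a recommended way to query an apiserver):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // status models the relevant fields of the v1 Status body seen in the log.
    type status struct {
        Message string `json:"message"`
        Reason  string `json:"reason"`
        Code    int    `json:"code"`
    }

    func main() {
        tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
        resp, err := (&http.Client{Transport: tr}).Get("https://api-int.crc.testing:6443/livez")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer resp.Body.Close()
        var s status
        if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
            fmt.Println("decode:", err)
            return
        }
        fmt.Printf("HTTP %d: %s (%s)\n", resp.StatusCode, s.Message, s.Reason)
    }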
Feb 16 11:07:02 crc kubenswrapper[4797]: I0216 11:07:02.934285 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 20:59:53.537527695 +0000 UTC
Feb 16 11:07:03 crc kubenswrapper[4797]: I0216 11:07:03.043387 4797 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]log ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]etcd ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/priority-and-fairness-filter ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-apiextensions-informers ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-apiextensions-controllers ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/crd-informer-synced ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-system-namespaces-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/bootstrap-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/apiservice-registration-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/apiservice-discovery-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]autoregister-completion ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/apiservice-openapi-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 16 11:07:03 crc kubenswrapper[4797]: livez check failed
Feb 16 11:07:03 crc kubenswrapper[4797]: I0216 11:07:03.044293 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 11:07:03 crc kubenswrapper[4797]: I0216 11:07:03.935308 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 04:41:19.102960524 +0000 UTC
Feb 16 11:07:04 crc kubenswrapper[4797]: I0216 11:07:04.484437 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 16 11:07:04 crc kubenswrapper[4797]: I0216 11:07:04.484697 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 11:07:04 crc kubenswrapper[4797]: I0216 11:07:04.486200 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:04 crc kubenswrapper[4797]: I0216 11:07:04.486269 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:04 crc kubenswrapper[4797]: I0216 11:07:04.486286 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:04 crc kubenswrapper[4797]: I0216 11:07:04.501531 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 16 11:07:04 crc kubenswrapper[4797]: I0216 11:07:04.935848 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:12:38.854278605 +0000 UTC
Feb 16 11:07:05 crc kubenswrapper[4797]: I0216 11:07:05.099842 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 11:07:05 crc kubenswrapper[4797]: I0216 11:07:05.101253 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:05 crc kubenswrapper[4797]: I0216 11:07:05.101313 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:05 crc kubenswrapper[4797]: I0216 11:07:05.101330 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:05 crc kubenswrapper[4797]: I0216 11:07:05.936561 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 08:33:39.310773472 +0000 UTC
Feb 16 11:07:06 crc kubenswrapper[4797]: E0216 11:07:06.059477 4797 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 11:07:06 crc kubenswrapper[4797]: E0216 11:07:06.800889 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
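By 11:07:03 the probe is no longer rejected outright; the apiserver now answers with its verbose health report, and every check passes except poststarthook/rbac/bootstrap-roles, which alone is enough to turn /livez into a 500 ("reason withheld" means the detail is only exposed on the verbose ?verbose endpoint to authorized callers). The [+]/[-] report format is line-oriented and easy to scan mechanically. A small Go sketch that pulls the failing check names out of a body like the one captured above (the sample body is abridged from this log):

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // failingChecks returns the names of the [-] entries in a verbose
    // healthz/livez body.
    func failingChecks(body string) []string {
        var failed []string
        sc := bufio.NewScanner(strings.NewReader(body))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "[-]") {
                // e.g. "[-]poststarthook/rbac/bootstrap-roles failed: reason withheld"
                fields := strings.Fields(strings.TrimPrefix(line, "[-]"))
                if len(fields) > 0 {
                    failed = append(failed, fields[0])
                }
            }
        }
        return failed
    }

    func main() {
        body := `[+]ping ok
    [+]etcd ok
    [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
    livez check failed`
        fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
    }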
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.804243 4797 trace.go:236] Trace[576292393]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 11:06:55.352) (total time: 11451ms):
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[576292393]: ---"Objects listed" error: 11451ms (11:07:06.804)
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[576292393]: [11.451387615s] [11.451387615s] END
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.804277 4797 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 11:07:06 crc kubenswrapper[4797]: E0216 11:07:06.805043 4797 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.811075 4797 trace.go:236] Trace[1793308475]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 11:06:53.689) (total time: 13121ms):
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[1793308475]: ---"Objects listed" error: 13121ms (11:07:06.810)
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[1793308475]: [13.121560272s] [13.121560272s] END
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.811421 4797 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.811511 4797 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.811676 4797 trace.go:236] Trace[811888131]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 11:06:53.375) (total time: 13436ms):
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[811888131]: ---"Objects listed" error: 13436ms (11:07:06.811)
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[811888131]: [13.436324138s] [13.436324138s] END
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.811707 4797 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.811911 4797 trace.go:236] Trace[1041102886]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 11:06:53.793) (total time: 13017ms):
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[1041102886]: ---"Objects listed" error: 13017ms (11:07:06.811)
Feb 16 11:07:06 crc kubenswrapper[4797]: Trace[1041102886]: [13.017982345s] [13.017982345s] END
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.811942 4797 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.814310 4797 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.913806 4797 apiserver.go:52] "Watching apiserver"
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.918302 4797 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.918528 4797 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.919217 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.919335 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:06 crc kubenswrapper[4797]: E0216 11:07:06.919464 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.919477 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.919634 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.919774 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:06 crc kubenswrapper[4797]: E0216 11:07:06.919851 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.920021 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:06 crc kubenswrapper[4797]: E0216 11:07:06.920067 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.921460 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.921566 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.921638 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.921769 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.922167 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.922474 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.922485 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.922790 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.923905 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.928082 4797 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.938248 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:32:51.79749321 +0000 UTC Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.950505 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.955345 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.960136 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.961345 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.966678 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.970203 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.981467 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.988414 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:06 crc kubenswrapper[4797]: I0216 11:07:06.998816 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.006685 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.012836 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.012889 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.012920 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.012942 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.012969 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.012987 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013009 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 
11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013026 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013046 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013062 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013082 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013103 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013119 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013136 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013153 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013178 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013196 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013222 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013248 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013292 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013311 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013297 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013328 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013348 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013345 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013374 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013383 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013397 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013423 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013443 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013467 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013491 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013515 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013534 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013552 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013552 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013639 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013642 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013663 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013684 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013702 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013719 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013736 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013758 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" 
(UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013772 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013784 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013806 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013822 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013938 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013963 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.013983 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014439 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014462 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014479 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014498 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014517 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014535 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014553 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014594 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014614 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014635 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014657 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014676 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014702 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014730 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014766 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014790 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014809 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014832 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014849 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014868 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014888 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014910 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014930 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014954 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015018 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015051 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015075 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015101 4797 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015130 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015156 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015183 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015217 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015235 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015255 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015279 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015306 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015326 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015347 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015367 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015387 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015407 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015435 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015455 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015475 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015497 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015515 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015534 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") 
" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015553 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015570 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015617 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015636 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015654 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015675 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015696 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015713 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015731 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015749 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: 
\"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015768 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015790 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015813 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015831 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015848 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015866 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015882 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015902 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015920 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015937 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015953 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015970 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015987 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016004 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016026 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016044 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016064 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016082 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016102 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016121 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016139 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016159 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016179 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016200 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016220 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016240 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016264 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016293 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016319 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016341 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016364 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016386 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016407 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016427 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016446 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016469 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016489 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016521 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016543 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016564 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016605 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016632 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016662 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016682 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016704 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016723 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016741 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016763 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016783 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016804 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016834 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016854 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016876 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016897 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016919 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016940 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016959 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016977 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016999 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017017 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017038 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017060 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017082 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017101 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017120 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017138 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017158 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017188 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017209 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017227 4797 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017248 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017270 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017292 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017312 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017333 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017351 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017373 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017398 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017430 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 11:07:07 crc 
kubenswrapper[4797]: I0216 11:07:07.017456 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017474 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017493 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017516 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017538 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017560 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017599 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017622 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017642 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014036 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" 
(UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.021823 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014049 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014059 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014082 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014132 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014193 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014237 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014299 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014382 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014417 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014475 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014611 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014659 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014673 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014789 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014890 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014915 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.014945 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015162 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015170 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015398 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015455 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015593 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.015649 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016027 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016173 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016303 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022067 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016350 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016542 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016710 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016737 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016696 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016868 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.016970 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017249 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017560 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.017728 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:07:07.517652764 +0000 UTC m=+22.237837954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017781 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.017914 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.018518 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.018861 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019121 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019139 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019335 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019441 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019623 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019685 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019729 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019743 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.019922 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020033 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020038 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020061 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020068 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020155 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020192 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020248 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020289 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020482 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.020829 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.021420 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.021633 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022095 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022115 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022351 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022397 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022531 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022748 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022777 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022808 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.022875 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023076 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023132 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023134 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023433 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023556 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023628 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023914 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.023935 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.024049 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.024057 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.024180 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.024389 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.024646 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.024796 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025164 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025200 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.024736 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025282 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025313 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025333 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025351 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025371 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025422 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025451 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:07 crc kubenswrapper[4797]: 
I0216 11:07:07.025473 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025492 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025514 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025535 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025555 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025595 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025619 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025647 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025674 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025696 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025718 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025746 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025826 4797 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025843 4797 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025860 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025874 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025885 4797 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025896 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025907 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025920 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") 
on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025961 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025992 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.027635 4797 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030924 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025265 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025308 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025474 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.031311 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.025520 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.026302 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.026775 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.026922 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.026940 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.026998 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.027219 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.027391 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.027447 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.027407 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.029700 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.029724 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030113 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030399 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030475 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030515 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030664 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030680 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.030836 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.031823 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.031987 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:07.531960535 +0000 UTC m=+22.252145515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030985 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.032040 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.032336 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.032620 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:07.532562358 +0000 UTC m=+22.252747528 (durationBeforeRetry 500ms). 
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.031279 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.032631 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.031370 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.032991 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.033044 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.033349 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.033960 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.034648 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.030977 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.034730 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.035127 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.035481 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.035566 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.035879 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.036298 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.036403 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.036818 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.036859 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.036928 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.036975 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.036993 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.037625 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.037880 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.038489 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.037262 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.039248 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.039522 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.046254 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.046309 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.046325 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.046393 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:07.546372339 +0000 UTC m=+22.266557549 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.047314 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.047411 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.048180 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.048331 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.048601 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.048624 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.048637 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.048652 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.048694 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:07.54867215 +0000 UTC m=+22.268857350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.049083 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.049244 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.049257 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.049300 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.049413 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.049559 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.050332 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.050438 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.050206 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.050364 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.050875 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.051170 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.051368 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.051618 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.052770 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.052921 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.053914 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.054367 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.057120 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.057405 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.058986 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.059958 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.060052 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.060416 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.064710 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.065982 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.067653 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.067925 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.067933 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.067959 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.068088 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.068111 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.068537 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.068509 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.068635 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.068738 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.069176 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.069376 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.069546 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.069547 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.069789 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.069847 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.069947 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.070146 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.070346 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.071895 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.072030 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.076455 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.078106 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.079400 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.084958 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.088331 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.094067 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.112484 4797 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126665 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126730 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126787 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126801 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126813 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126824 4797 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126827 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126837 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126862 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126888 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126906 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126919 4797 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126929 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126938 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126946 4797 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126955 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126966 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126974 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126983 4797 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.126992 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127001 4797 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127009 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127018 4797 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 
11:07:07.127026 4797 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127035 4797 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127042 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127050 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127059 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127068 4797 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127075 4797 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127083 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127091 4797 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127101 4797 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127108 4797 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127117 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127125 4797 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127133 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127143 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127152 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127161 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127170 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127178 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127187 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127196 4797 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127205 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127213 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127221 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127229 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127238 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" 
DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127245 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127255 4797 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127263 4797 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127271 4797 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127280 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127288 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127297 4797 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127306 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127314 4797 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127322 4797 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127330 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127338 4797 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127345 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" 
DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127354 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127362 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127369 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127378 4797 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127385 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127393 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127402 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127434 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127444 4797 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127453 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127461 4797 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127468 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127476 4797 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127484 4797 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127492 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127501 4797 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127510 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127518 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127526 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127535 4797 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127543 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127551 4797 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127559 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127567 4797 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127575 4797 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127607 4797 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 
11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127616 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127624 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127633 4797 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127641 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127648 4797 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127656 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127664 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127673 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127681 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127689 4797 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127697 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127710 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127718 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127726 4797 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127733 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127742 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127750 4797 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127758 4797 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127766 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127775 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127783 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127791 4797 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127799 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127808 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127816 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127824 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127832 4797 reconciler_common.go:293] "Volume detached for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127840 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127848 4797 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127856 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127865 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127878 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127886 4797 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127894 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127902 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127911 4797 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127919 4797 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127927 4797 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127935 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127943 4797 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127952 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127961 4797 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127969 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127980 4797 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127988 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.127997 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128008 4797 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128016 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128025 4797 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128033 4797 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128042 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128050 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128059 4797 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128067 4797 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128077 4797 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128084 4797 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128093 4797 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128100 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128108 4797 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128116 4797 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128124 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128133 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128141 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128149 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128156 4797 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128164 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" 
(UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128172 4797 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128181 4797 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128189 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128198 4797 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128206 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128215 4797 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128223 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128230 4797 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128238 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128248 4797 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128256 4797 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128264 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128272 4797 reconciler_common.go:293] "Volume detached for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128280 4797 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128288 4797 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128295 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128303 4797 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128311 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128319 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128329 4797 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128338 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128346 4797 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128355 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128363 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128371 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 
11:07:07.128379 4797 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128388 4797 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128395 4797 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128403 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128412 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128419 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128428 4797 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128435 4797 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.128443 4797 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.235664 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.244146 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.249267 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 11:07:07 crc kubenswrapper[4797]: W0216 11:07:07.257905 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-493b34d522a84b4cff411e4fc7e703f05dad3d4c2ddad84e97e75b3a02ba9c88 WatchSource:0}: Error finding container 493b34d522a84b4cff411e4fc7e703f05dad3d4c2ddad84e97e75b3a02ba9c88: Status 404 returned error can't find the container with id 493b34d522a84b4cff411e4fc7e703f05dad3d4c2ddad84e97e75b3a02ba9c88
Feb 16 11:07:07 crc kubenswrapper[4797]: W0216 11:07:07.265622 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-5f6314275e8810381c8bf29c9bc9ee1a14fec11b26ef7231856dd21a7bd06e62 WatchSource:0}: Error finding container 5f6314275e8810381c8bf29c9bc9ee1a14fec11b26ef7231856dd21a7bd06e62: Status 404 returned error can't find the container with id 5f6314275e8810381c8bf29c9bc9ee1a14fec11b26ef7231856dd21a7bd06e62
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.405720 4797 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50976->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.405773 4797 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44560->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.405793 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50976->192.168.126.11:17697: read: connection reset by peer"
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.405835 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:44560->192.168.126.11:17697: read: connection reset by peer"
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.531218 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
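The paired patch_prober.go:28 / prober.go:107 entries above record the kube-apiserver-check-endpoints container failing both its readiness and liveness probes: the HTTPS GET to 192.168.126.11:17697/healthz is reset mid-read while the API server comes back up. A minimal sketch of reproducing that check by hand follows; it is not the kubelet's prober, the endpoint is read off the log, and skipping TLS verification is an assumption since the probe's CA configuration is not visible in this capture.

```go
// Minimal reproduction of the failing check-endpoints probe, not the kubelet's prober.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: no CA bundle at hand, so verification is skipped for illustration.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:17697/healthz")
	if err != nil {
		// A "read: connection reset by peer" here matches the prober.go:107 failures above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```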
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.531382 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:07:08.531349295 +0000 UTC m=+23.251534275 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.532081 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.532254 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.532322 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:08.532309465 +0000 UTC m=+23.252494485 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.633186 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.633291 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.633332 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633400 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633422 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633435 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633447 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633488 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:08.633467749 +0000 UTC m=+23.353652749 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633510 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:08.63349932 +0000 UTC m=+23.353684310 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633530 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633541 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633548 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 11:07:07 crc kubenswrapper[4797]: E0216 11:07:07.633600 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:08.633571511 +0000 UTC m=+23.353756491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.939874 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:08:23.745519717 +0000 UTC
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.978489 4797 csr.go:261] certificate signing request csr-vtvtf is approved, waiting to be issued
Feb 16 11:07:07 crc kubenswrapper[4797]: I0216 11:07:07.985297 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.114906 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.115695 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.117445 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1" exitCode=255
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.162894 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.261066 4797 csr.go:257] certificate signing request csr-vtvtf is issued
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.469976 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.540367 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.540462 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
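Every nestedpendingoperations.go:348 entry above carries a durationBeforeRetry; comparing the 1s retries here (m=+23.25 to +23.35) with the 2s retries that follow shows the kubelet backing off per volume after each failed MountVolume.SetUp or UnmountVolume.TearDown. A small sketch for extracting that schedule, assuming the journal text is fed on stdin; the regex is fitted to the exact phrasing of these lines, not to any stable kubelet interface.

```go
// Offline triage of the nestedpendingoperations.go:348 failures in this capture.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches e.g.: (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf"
	re := regexp.MustCompile(`\(durationBeforeRetry ([0-9a-z.]+)\)\. Error: (\w+)\.\w+ failed for volume "([^"]+)"`)
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("volume=%-45s op=%-13s retry-in=%s\n", m[3], m[2], m[1])
		}
	}
}
```

Piping the kubelet journal (e.g. journalctl -u kubelet) through this prints each volume stepping from 1s to 2s, consistent with the doubling backoff the entries above and below describe.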
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:07:10.540571861 +0000 UTC m=+25.260756851 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.540652 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.540730 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:10.540712584 +0000 UTC m=+25.260897564 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.641061 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.641117 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641242 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641365 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:10.641336386 +0000 UTC m=+25.361521396 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.641254 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641366 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641426 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641440 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641400 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641514 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641527 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641489 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:10.641473699 +0000 UTC m=+25.361658739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.641589 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 11:07:10.641563391 +0000 UTC m=+25.361748371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.641802 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.660405 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.661498 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.662348 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.668334 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.669570 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.670249 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.670904 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.671735 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.672307 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.672905 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.679484 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: 
I0216 11:07:08.680291 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.681461 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.681997 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.682879 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.683622 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.684412 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.685143 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.685700 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.688464 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.689245 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.690024 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.802450 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.802945 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.803753 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.804246 4797 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.804743 4797 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.804846 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.806205 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.806751 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.807207 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.808369 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.809058 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.809567 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.810238 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.810907 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.814195 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.814787 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.815728 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.816312 4797 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.817192 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.817728 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.818557 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.819271 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.820108 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.820537 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.821346 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.821874 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.822386 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.823188 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.823680 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lkgrl"] Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.823975 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.823996 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-rd6dh"] Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.824134 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.824337 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-5qvbt"] Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.824460 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59"} Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.824482 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8h8ld"] Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.824598 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.824669 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.825437 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5f6314275e8810381c8bf29c9bc9ee1a14fec11b26ef7231856dd21a7bd06e62"} Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.825501 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"2eb2c2fea133922d1349dc2fa038ba6cb2447c29cc24d062dc3c6934c7b752b6"} Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.825511 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05"} Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.825523 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h9hsp"] Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.825643 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.826046 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.826696 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d"} Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.826747 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"493b34d522a84b4cff411e4fc7e703f05dad3d4c2ddad84e97e75b3a02ba9c88"} Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.826769 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1"} Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.826816 4797 scope.go:117] "RemoveContainer" containerID="b35f505e0d6cd8526da203d9acab34b661d4396cbb286ac40e12fc12434303db" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.827037 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.827059 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.827907 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.829130 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.829986 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.830448 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.831223 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.834178 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.834500 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.834687 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.834777 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.834948 4797 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.834976 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835091 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835195 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835198 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835262 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835310 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835393 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835429 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835545 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835750 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.835881 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.839690 4797 scope.go:117] "RemoveContainer" containerID="cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.839852 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.839905 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842134 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-os-release\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842196 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-kubelet\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842223 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-slash\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842246 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-bin\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842267 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/812f1f08-469d-44f4-907e-60ad61837364-ovn-node-metrics-cert\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842290 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9532a098-7e41-454c-af48-44f9a9478d12-cni-binary-copy\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842315 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszb5\" (UniqueName: \"kubernetes.io/projected/9532a098-7e41-454c-af48-44f9a9478d12-kube-api-access-rszb5\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842339 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xtl6\" (UniqueName: \"kubernetes.io/projected/6e28dd15-03ea-4c9f-94d0-7b953d0c4044-kube-api-access-8xtl6\") pod \"node-resolver-rd6dh\" (UID: \"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\") " pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842358 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-etc-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842380 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/128f4e85-fd17-4281-97d2-872fda792b21-mcd-auth-proxy-config\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842435 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6e28dd15-03ea-4c9f-94d0-7b953d0c4044-hosts-file\") pod \"node-resolver-rd6dh\" (UID: \"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\") " pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842458 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842477 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4sj\" (UniqueName: \"kubernetes.io/projected/812f1f08-469d-44f4-907e-60ad61837364-kube-api-access-mv4sj\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842529 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-env-overrides\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842669 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842701 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-log-socket\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842721 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-ovn-kubernetes\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842741 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-conf-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842781 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-systemd\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842795 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-node-log\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842812 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/377bb3bb-1c3d-4cc5-a159-2d116f464492-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842828 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-os-release\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842842 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-etc-kubernetes\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842855 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-system-cni-dir\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842871 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-kubelet\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.842885 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-cnibin\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843402 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-netns\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843427 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-cnibin\") pod 
\"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843449 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/128f4e85-fd17-4281-97d2-872fda792b21-proxy-tls\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843472 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-systemd-units\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843490 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-config\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843508 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-netns\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843527 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg9cs\" (UniqueName: \"kubernetes.io/projected/377bb3bb-1c3d-4cc5-a159-2d116f464492-kube-api-access-lg9cs\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843549 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-var-lib-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843599 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-ovn\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843635 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-socket-dir-parent\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843666 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-cni-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843716 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-hostroot\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843748 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843771 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-k8s-cni-cncf-io\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843790 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-cni-bin\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843828 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-script-lib\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843851 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-system-cni-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843874 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-cni-multus\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843894 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-netd\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843915 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9532a098-7e41-454c-af48-44f9a9478d12-multus-daemon-config\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843936 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-multus-certs\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.843967 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/377bb3bb-1c3d-4cc5-a159-2d116f464492-cni-binary-copy\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.844005 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/128f4e85-fd17-4281-97d2-872fda792b21-rootfs\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.844027 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59p29\" (UniqueName: \"kubernetes.io/projected/128f4e85-fd17-4281-97d2-872fda792b21-kube-api-access-59p29\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.896820 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.908916 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.922835 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.932049 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.940262 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 07:37:29.515595043 +0000 UTC Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.940605 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944811 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-k8s-cni-cncf-io\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944849 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-cni-bin\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944872 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-script-lib\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944887 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-system-cni-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944902 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-cni-multus\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944917 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59p29\" (UniqueName: \"kubernetes.io/projected/128f4e85-fd17-4281-97d2-872fda792b21-kube-api-access-59p29\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944932 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-netd\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 
11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944947 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9532a098-7e41-454c-af48-44f9a9478d12-multus-daemon-config\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944963 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-multus-certs\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944981 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/377bb3bb-1c3d-4cc5-a159-2d116f464492-cni-binary-copy\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944998 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/128f4e85-fd17-4281-97d2-872fda792b21-rootfs\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945013 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszb5\" (UniqueName: \"kubernetes.io/projected/9532a098-7e41-454c-af48-44f9a9478d12-kube-api-access-rszb5\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945028 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-os-release\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945044 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-kubelet\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945058 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-slash\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945071 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-bin\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945085 4797 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/812f1f08-469d-44f4-907e-60ad61837364-ovn-node-metrics-cert\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945100 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9532a098-7e41-454c-af48-44f9a9478d12-cni-binary-copy\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945125 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xtl6\" (UniqueName: \"kubernetes.io/projected/6e28dd15-03ea-4c9f-94d0-7b953d0c4044-kube-api-access-8xtl6\") pod \"node-resolver-rd6dh\" (UID: \"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\") " pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945155 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-etc-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945170 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/128f4e85-fd17-4281-97d2-872fda792b21-mcd-auth-proxy-config\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945192 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6e28dd15-03ea-4c9f-94d0-7b953d0c4044-hosts-file\") pod \"node-resolver-rd6dh\" (UID: \"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\") " pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945209 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945224 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-env-overrides\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945240 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv4sj\" (UniqueName: \"kubernetes.io/projected/812f1f08-469d-44f4-907e-60ad61837364-kube-api-access-mv4sj\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945268 4797 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945283 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-log-socket\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945296 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-ovn-kubernetes\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945310 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-conf-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945324 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-systemd\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945337 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-node-log\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945352 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/377bb3bb-1c3d-4cc5-a159-2d116f464492-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945357 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-system-cni-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945367 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-os-release\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945444 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-etc-kubernetes\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945473 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-system-cni-dir\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945505 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-kubelet\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945537 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-cnibin\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945563 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-netns\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945605 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-cnibin\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945554 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-netd\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945627 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/128f4e85-fd17-4281-97d2-872fda792b21-proxy-tls\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945664 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-systemd-units\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945723 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-config\") pod \"ovnkube-node-h9hsp\" (UID: 
\"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945744 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-etc-kubernetes\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945752 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-netns\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.944960 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-k8s-cni-cncf-io\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945809 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-script-lib\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945812 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg9cs\" (UniqueName: \"kubernetes.io/projected/377bb3bb-1c3d-4cc5-a159-2d116f464492-kube-api-access-lg9cs\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945843 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-os-release\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945884 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-netns\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945888 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-netns\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945926 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-cnibin\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945964 4797 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-node-log\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945995 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946023 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-log-socket\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946059 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-ovn-kubernetes\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946087 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-conf-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946087 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-cnibin\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945679 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-kubelet\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946110 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-cni-bin\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946129 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-systemd\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.945724 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-var-lib-cni-multus\") pod \"multus-5qvbt\" (UID: 
\"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946163 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-os-release\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946196 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-slash\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946204 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946232 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-bin\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946453 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-host-run-multus-certs\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946523 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/128f4e85-fd17-4281-97d2-872fda792b21-rootfs\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946611 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-var-lib-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946642 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-ovn\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946668 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-socket-dir-parent\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 
11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946758 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-cni-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946472 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-system-cni-dir\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946789 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-hostroot\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946803 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-env-overrides\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946809 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9532a098-7e41-454c-af48-44f9a9478d12-multus-daemon-config\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946831 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-systemd-units\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.946816 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947101 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/128f4e85-fd17-4281-97d2-872fda792b21-mcd-auth-proxy-config\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947142 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-etc-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947155 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6e28dd15-03ea-4c9f-94d0-7b953d0c4044-hosts-file\") pod \"node-resolver-rd6dh\" (UID: \"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\") " pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947164 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-ovn\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947183 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-kubelet\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947187 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/377bb3bb-1c3d-4cc5-a159-2d116f464492-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947227 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-hostroot\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947268 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-socket-dir-parent\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947280 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-var-lib-openvswitch\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947302 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9532a098-7e41-454c-af48-44f9a9478d12-multus-cni-dir\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947302 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9532a098-7e41-454c-af48-44f9a9478d12-cni-binary-copy\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947433 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-config\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.947450 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/377bb3bb-1c3d-4cc5-a159-2d116f464492-cni-binary-copy\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.949932 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/377bb3bb-1c3d-4cc5-a159-2d116f464492-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.951456 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/812f1f08-469d-44f4-907e-60ad61837364-ovn-node-metrics-cert\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.952089 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/128f4e85-fd17-4281-97d2-872fda792b21-proxy-tls\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.953755 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.963891 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg9cs\" (UniqueName: \"kubernetes.io/projected/377bb3bb-1c3d-4cc5-a159-2d116f464492-kube-api-access-lg9cs\") pod \"multus-additional-cni-plugins-8h8ld\" (UID: \"377bb3bb-1c3d-4cc5-a159-2d116f464492\") " pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.967783 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.968891 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszb5\" (UniqueName: \"kubernetes.io/projected/9532a098-7e41-454c-af48-44f9a9478d12-kube-api-access-rszb5\") pod \"multus-5qvbt\" (UID: \"9532a098-7e41-454c-af48-44f9a9478d12\") " pod="openshift-multus/multus-5qvbt" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.973102 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xtl6\" (UniqueName: \"kubernetes.io/projected/6e28dd15-03ea-4c9f-94d0-7b953d0c4044-kube-api-access-8xtl6\") pod \"node-resolver-rd6dh\" (UID: \"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\") " pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.973383 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv4sj\" (UniqueName: \"kubernetes.io/projected/812f1f08-469d-44f4-907e-60ad61837364-kube-api-access-mv4sj\") pod \"ovnkube-node-h9hsp\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.976379 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59p29\" (UniqueName: \"kubernetes.io/projected/128f4e85-fd17-4281-97d2-872fda792b21-kube-api-access-59p29\") pod \"machine-config-daemon-lkgrl\" (UID: \"128f4e85-fd17-4281-97d2-872fda792b21\") " pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.982235 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.982284 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.982353 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.982452 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.982253 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:08 crc kubenswrapper[4797]: E0216 11:07:08.982520 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:08 crc kubenswrapper[4797]: I0216 11:07:08.985322 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:08 crc 
kubenswrapper[4797]: I0216 11:07:08.995341 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.005277 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.017677 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.028803 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin 
routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.041729 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b35f505e0d6cd8526da203d9acab34b661d4396cbb286ac40e12fc12434303db\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:06:50Z\\\",\\\"message\\\":\\\"W0216 11:06:49.448765 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0216 11:06:49.449081 1 crypto.go:601] Generating new CA for check-endpoints-signer@1771240009 cert, and key in /tmp/serving-cert-471907346/serving-signer.crt, /tmp/serving-cert-471907346/serving-signer.key\\\\nI0216 11:06:50.055447 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:06:50.059847 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0216 11:06:50.060026 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:06:50.067139 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-471907346/tls.crt::/tmp/serving-cert-471907346/tls.key\\\\\\\"\\\\nF0216 11:06:50.269132 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for 
RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.052258 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection 
refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.068929 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.079398 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.092688 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.105751 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.120876 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.122073 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.123311 4797 scope.go:117] "RemoveContainer" containerID="cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1" Feb 16 11:07:09 crc kubenswrapper[4797]: E0216 11:07:09.123460 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.138398 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.142794 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.151562 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5qvbt" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.157097 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.159204 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-rd6dh" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.165960 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.170961 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.172707 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: W0216 11:07:09.178116 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9532a098_7e41_454c_af48_44f9a9478d12.slice/crio-d4d6f17377b1a444a2cc02822d125b63413305a2c505fb2841d5fb5a3aedbdb2 WatchSource:0}: Error finding container d4d6f17377b1a444a2cc02822d125b63413305a2c505fb2841d5fb5a3aedbdb2: Status 404 returned error can't find the container with id d4d6f17377b1a444a2cc02822d125b63413305a2c505fb2841d5fb5a3aedbdb2 Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.189423 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: W0216 11:07:09.193376 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod377bb3bb_1c3d_4cc5_a159_2d116f464492.slice/crio-31f391a3237565939fa17bca06a7caf634b27db2e9c1213a38848c0ad826e4e6 WatchSource:0}: Error finding container 31f391a3237565939fa17bca06a7caf634b27db2e9c1213a38848c0ad826e4e6: Status 404 returned error can't find the container with id 31f391a3237565939fa17bca06a7caf634b27db2e9c1213a38848c0ad826e4e6 Feb 16 11:07:09 crc kubenswrapper[4797]: W0216 11:07:09.197386 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812f1f08_469d_44f4_907e_60ad61837364.slice/crio-5b4e9a95230f56894a97e64e3f2a1083fbf3fc3c7debb2d6da8f8150dcc08672 WatchSource:0}: Error finding container 5b4e9a95230f56894a97e64e3f2a1083fbf3fc3c7debb2d6da8f8150dcc08672: Status 404 returned error can't find the container with id 5b4e9a95230f56894a97e64e3f2a1083fbf3fc3c7debb2d6da8f8150dcc08672 Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.226114 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.263814 4797 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 11:02:08 +0000 UTC, rotation deadline is 2026-12-17 14:14:49.782325637 +0000 UTC Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.263878 4797 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7299h7m40.518449729s for next certificate 
rotation Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.274771 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.299666 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.322847 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: 
I0216 11:07:09.341673 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.355776 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.368108 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.378762 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.389081 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.404748 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:09Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:09 crc kubenswrapper[4797]: I0216 11:07:09.941285 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:13:36.587273498 +0000 UTC Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.127366 4797 generic.go:334] "Generic (PLEG): container finished" podID="377bb3bb-1c3d-4cc5-a159-2d116f464492" containerID="a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a" exitCode=0 Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.127423 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerDied","Data":"a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.127449 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerStarted","Data":"31f391a3237565939fa17bca06a7caf634b27db2e9c1213a38848c0ad826e4e6"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.129233 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerStarted","Data":"c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.129296 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerStarted","Data":"d4d6f17377b1a444a2cc02822d125b63413305a2c505fb2841d5fb5a3aedbdb2"} Feb 16 
11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.130625 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.131909 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438" exitCode=0 Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.131966 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.131993 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"5b4e9a95230f56894a97e64e3f2a1083fbf3fc3c7debb2d6da8f8150dcc08672"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.134534 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rd6dh" event={"ID":"6e28dd15-03ea-4c9f-94d0-7b953d0c4044","Type":"ContainerStarted","Data":"4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.134606 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-rd6dh" event={"ID":"6e28dd15-03ea-4c9f-94d0-7b953d0c4044","Type":"ContainerStarted","Data":"43d369720c2723d22975ec97dbe706197e7b9de804c647ec83d832c09a22f0f9"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.137238 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.137279 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.137295 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"25d97dd51fad2606721722c41e5d6a44bed429d489f20c58fc481775f27c3ded"} Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.149127 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.162454 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.182856 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.207837 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.224031 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.236377 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.258443 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: 
I0216 11:07:10.271674 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.281967 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.295692 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.319873 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.330234 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.352997 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.363528 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.373054 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.386162 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc 
kubenswrapper[4797]: I0216 11:07:10.422067 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.434467 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.448094 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.469413 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.503258 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.546694 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.561654 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.561765 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.561909 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.561939 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:07:14.561920238 +0000 UTC m=+29.282105218 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.562009 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.562061 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:14.562047841 +0000 UTC m=+29.282232821 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.575236 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.590034 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.613016 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:10Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.662514 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.662559 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.662613 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662660 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662729 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" 
failed. No retries permitted until 2026-02-16 11:07:14.662712404 +0000 UTC m=+29.382897384 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662731 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662751 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662762 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662811 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:14.662794506 +0000 UTC m=+29.382979566 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662832 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662866 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662878 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.662933 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:14.662916038 +0000 UTC m=+29.383101148 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.942152 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 15:48:55.362869655 +0000 UTC Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.981735 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.982272 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.981849 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.982371 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:10 crc kubenswrapper[4797]: I0216 11:07:10.981762 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:10 crc kubenswrapper[4797]: E0216 11:07:10.982458 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.155109 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7"} Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.155154 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e"} Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.155163 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217"} Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.155173 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be"} Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.155181 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f"} Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.156973 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerStarted","Data":"6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2"} Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.167615 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.180555 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.200315 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.212158 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.224446 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.234528 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.248336 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.254971 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-77slb"] Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.255307 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.256619 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.257701 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.258963 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.259347 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.261626 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.273496 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1b86971c-f0fb-492a-ade1-9535933f5d2b-serviceca\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.273532 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-789z9\" (UniqueName: \"kubernetes.io/projected/1b86971c-f0fb-492a-ade1-9535933f5d2b-kube-api-access-789z9\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.273620 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b86971c-f0fb-492a-ade1-9535933f5d2b-host\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.279110 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.292967 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.307162 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.320130 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.341991 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.353551 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.369236 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.374755 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1b86971c-f0fb-492a-ade1-9535933f5d2b-serviceca\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.374816 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-789z9\" (UniqueName: \"kubernetes.io/projected/1b86971c-f0fb-492a-ade1-9535933f5d2b-kube-api-access-789z9\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.375084 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b86971c-f0fb-492a-ade1-9535933f5d2b-host\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.375126 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1b86971c-f0fb-492a-ade1-9535933f5d2b-host\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.375733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/1b86971c-f0fb-492a-ade1-9535933f5d2b-serviceca\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.379602 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.394897 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.408697 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-789z9\" (UniqueName: \"kubernetes.io/projected/1b86971c-f0fb-492a-ade1-9535933f5d2b-kube-api-access-789z9\") pod \"node-ca-77slb\" (UID: \"1b86971c-f0fb-492a-ade1-9535933f5d2b\") " pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.409247 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.426312 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.442853 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.455791 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.468728 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.479715 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.495895 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.511834 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.522773 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.543897 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.568085 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-77slb" Feb 16 11:07:11 crc kubenswrapper[4797]: I0216 11:07:11.942771 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 18:08:10.53418948 +0000 UTC
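The certificate_manager entry just above stands apart from the webhook noise: the kubelet-serving certificate expires on 2026-02-24, and its rotation deadline of 2026-01-06 is already in the past at 11:07 on 2026-02-16, so rotation is due immediately. client-go's certificate manager picks that deadline at a jittered point 70-90% of the way through the certificate's validity window; the sketch below approximates that computation (it is not a copy of the upstream code, and the notBefore date is hypothetical, chosen only to make the example run):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline approximates how client-go's certificate manager
// schedules rotation: a uniformly jittered instant between 70% and 90%
// of the certificate's validity window.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * jitter))
}

func main() {
	// notAfter comes from the certificate_manager line above; notBefore is
	// a hypothetical one-year-earlier issue date.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.Add(-365 * 24 * time.Hour)

	deadline := nextRotationDeadline(notBefore, notAfter)
	now := time.Date(2026, 2, 16, 11, 7, 11, 0, time.UTC)
	fmt.Println("rotation deadline:", deadline)
	fmt.Println("rotation overdue: ", deadline.Before(now))
}
```

With the deadline already elapsed, the manager will attempt rotation on its next loop; note this serving certificate is a different certificate from the expired webhook cert that dominates the rest of this log.

Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.165263 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a"} Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.166866 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-77slb" event={"ID":"1b86971c-f0fb-492a-ade1-9535933f5d2b","Type":"ContainerStarted","Data":"2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c"} Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.166905 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-77slb" event={"ID":"1b86971c-f0fb-492a-ade1-9535933f5d2b","Type":"ContainerStarted","Data":"dd9a9f300fae2e06c78df9135dd4f3d5179cfb874b3d78426a59fe1101c70514"} Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.169086 4797 generic.go:334] "Generic (PLEG): container finished" podID="377bb3bb-1c3d-4cc5-a159-2d116f464492" containerID="6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2" exitCode=0 Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.169131 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerDied","Data":"6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2"} Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.182037 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 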
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.195858 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.207777 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.220266 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z"
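Each failed patch in this log opens with a $setElementOrder/conditions directive. That is strategic-merge-patch bookkeeping: conditions is a list merged by its type key, and the directive pins the order of the merged elements. The kubelet produces these patches with apimachinery's two-way merge helper; a sketch of the same shape (assuming k8s.io/api and k8s.io/apimachinery are available on the module path):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	base := corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		},
	}}
	updated := corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionTrue,
				LastTransitionTime: metav1.Now()},
		},
	}}

	baseJSON, _ := json.Marshal(base)
	updatedJSON, _ := json.Marshal(updated)

	// Strategic merge: conditions are matched by their "type" patch key, so
	// the patch carries only the changed fields of each condition, plus
	// ordering directives when the merged list order must be pinned.
	patch, err := strategicpatch.CreateTwoWayMergePatch(baseJSON, updatedJSON, corev1.Pod{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(patch))
}
```

Against a reachable API server such a patch is routine; here it never gets that far, because the webhook call in front of it fails first.

Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.234938 4797 status_manager.go:875] "Failed to update status for pod" 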
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a
0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.249742 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.263446 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.278307 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.294770 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.311748 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.326717 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.347476 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z 
is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.359462 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.369384 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.382045 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.394017 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.407312 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.420703 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.432431 4797 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a
0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.445016 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.458481 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.470435 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.481241 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.495594 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.508965 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.519174 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.535790 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.546681 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-16T11:07:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.943616 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 23:14:37.305847407 +0000 UTC Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.982001 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.982071 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:12 crc kubenswrapper[4797]: I0216 11:07:12.982001 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:12 crc kubenswrapper[4797]: E0216 11:07:12.982246 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:12 crc kubenswrapper[4797]: E0216 11:07:12.982438 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:12 crc kubenswrapper[4797]: E0216 11:07:12.982563 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.205142 4797 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.207675 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.207713 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.207727 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.207836 4797 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.220153 4797 generic.go:334] "Generic (PLEG): container finished" podID="377bb3bb-1c3d-4cc5-a159-2d116f464492" containerID="9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028" exitCode=0 Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.220212 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerDied","Data":"9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.233473 4797 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.233821 4797 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.234937 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.234980 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.234996 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.235015 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.235028 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.250746 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.272127 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: E0216 11:07:13.272726 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.276782 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.276837 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.276852 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.276887 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.276900 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.297127 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: E0216 11:07:13.297139 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.304263 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.304297 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.304305 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.304318 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.304327 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.316278 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: E0216 11:07:13.321007 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.324497 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.324538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.324548 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.324561 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.324571 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.334819 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: E0216 11:07:13.338358 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"5
99a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.346823 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.346864 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.346872 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.346886 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.346896 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.351852 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: E0216 11:07:13.360865 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: E0216 11:07:13.361039 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.366391 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.366426 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.366436 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.366450 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.366459 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.373107 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.383510 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.401049 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.413480 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.424746 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.435166 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.444841 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.456049 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:13Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.477367 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.477395 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.477404 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.477416 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.477424 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.579671 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.579712 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.579737 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.579755 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.579766 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.682244 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.682299 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.682313 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.682333 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.682355 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.784462 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.784495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.784503 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.784516 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.784525 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.887169 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.887674 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.887688 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.887713 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.887723 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.943960 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:30:23.161359726 +0000 UTC Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.989732 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.989771 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.989782 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.989793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:13 crc kubenswrapper[4797]: I0216 11:07:13.989802 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:13Z","lastTransitionTime":"2026-02-16T11:07:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.091992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.092040 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.092053 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.092070 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.092083 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.195124 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.195177 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.195191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.195212 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.195226 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.228530 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.232533 4797 generic.go:334] "Generic (PLEG): container finished" podID="377bb3bb-1c3d-4cc5-a159-2d116f464492" containerID="e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba" exitCode=0 Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.232633 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerDied","Data":"e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.253420 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.270740 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.286922 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.297905 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.297938 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.297947 4797 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.297965 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.297974 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.300503 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.312435 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.327241 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.341599 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.352967 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.373831 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initCont
ainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.388645 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.400624 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.400970 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.401013 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.401026 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.401043 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.401054 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.415000 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.426338 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.438859 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:14Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.503310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.503345 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.503355 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.503383 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.503394 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.606480 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.606526 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.606536 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.606551 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.606564 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.650363 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.650674 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.650767 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:07:22.650737385 +0000 UTC m=+37.370922405 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.650819 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.650929 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:22.650907028 +0000 UTC m=+37.371092028 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.709394 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.709458 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.709474 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.709495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.709509 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.752176 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.752237 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.752275 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752383 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752445 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752452 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752470 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752488 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752511 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752490 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752543 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-16 11:07:22.752504562 +0000 UTC m=+37.472689722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752620 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:22.752607405 +0000 UTC m=+37.472792535 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.752747 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:22.752686036 +0000 UTC m=+37.472871186 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.811839 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.811908 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.811932 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.811963 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.811985 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.915760 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.915803 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.915815 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.915832 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.915843 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:14Z","lastTransitionTime":"2026-02-16T11:07:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.946197 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 01:39:55.013954142 +0000 UTC Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.982172 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.982358 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.982781 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:14 crc kubenswrapper[4797]: I0216 11:07:14.982860 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.983108 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:14 crc kubenswrapper[4797]: E0216 11:07:14.983112 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.019177 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.019245 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.019271 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.019304 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.019327 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.121341 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.121384 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.121394 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.121412 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.121421 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.224365 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.224403 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.224411 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.224425 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.224436 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.242186 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerStarted","Data":"80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.257185 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.275926 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z 
is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.293991 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.307500 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.321323 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.327181 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.327217 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.327229 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.327244 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.327255 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.333666 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.345329 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.362834 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.381803 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.398839 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.412030 4797 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.426450 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.430326 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.430364 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.430379 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.430395 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.430411 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.446995 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.462786 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.532815 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.532870 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.532880 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.532894 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.532911 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.636724 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.636769 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.636790 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.636822 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.636846 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.739617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.739683 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.739698 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.739715 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.739731 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.765887 4797 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.846379 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.846911 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.846931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.846955 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.846975 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.947233 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 00:54:52.205032405 +0000 UTC Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.949722 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.949769 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.949781 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.949797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.949807 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:15Z","lastTransitionTime":"2026-02-16T11:07:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:15 crc kubenswrapper[4797]: I0216 11:07:15.996971 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.012075 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.024307 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.035117 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.048436 4797 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a
0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.052907 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.052963 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.052979 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.052999 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.053013 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.065662 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.081996 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.095274 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.110171 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.124822 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.137650 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.155076 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.155122 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.155134 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.155151 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.155165 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.157937 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed
3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.168722 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.181278 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.283486 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.283529 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.283548 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.283564 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.283595 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.290728 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.291280 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.317188 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.331646 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.337150 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.349969 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.368179 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.379060 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.386211 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.386330 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.386392 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.386451 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.386513 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.392367 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.407216 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.417466 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.438997 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\
\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc 
kubenswrapper[4797]: I0216 11:07:16.450281 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.466264 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.489428 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.489477 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.489489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.489507 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.489519 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.506401 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.535208 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.550494 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.580203 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e
08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.591624 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.602197 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.611695 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.611730 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.611739 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.611753 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.611762 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.612617 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.626612 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.638771 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.654140 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.682416 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.695553 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.707164 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.714024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.714070 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.714080 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.714097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.714111 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.760251 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.772758 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.784286 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.800526 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.816338 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.816374 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.816384 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.816398 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.816409 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.919613 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.919666 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.919679 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.919697 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.919709 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:16Z","lastTransitionTime":"2026-02-16T11:07:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.947611 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:30:42.67585519 +0000 UTC Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.981882 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.981986 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:16 crc kubenswrapper[4797]: E0216 11:07:16.982019 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:16 crc kubenswrapper[4797]: I0216 11:07:16.981986 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:16 crc kubenswrapper[4797]: E0216 11:07:16.982183 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:16 crc kubenswrapper[4797]: E0216 11:07:16.982264 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.022640 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.022699 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.022711 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.022728 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.022740 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.126120 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.126158 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.126169 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.126183 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.126193 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.228165 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.228346 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.228414 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.228482 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.228607 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.296660 4797 generic.go:334] "Generic (PLEG): container finished" podID="377bb3bb-1c3d-4cc5-a159-2d116f464492" containerID="80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c" exitCode=0 Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.296763 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerDied","Data":"80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.296817 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.297373 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.316180 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.321498 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.331289 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.331655 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.331690 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.331699 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.331712 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.331721 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.348785 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e
08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.358801 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.360012 4797 scope.go:117] "RemoveContainer" containerID="cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1" Feb 16 11:07:17 crc kubenswrapper[4797]: E0216 11:07:17.360159 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.360440 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.373961 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.383421 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.395026 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.408367 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.419885 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.434251 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.435035 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.435068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.435076 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.435089 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.435099 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.452123 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9
cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.467844 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.482872 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.495708 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.511216 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.526141 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.542300 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.542338 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.542350 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.542365 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.542375 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.544105 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.558350 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.572892 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.588121 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.601166 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.616131 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.650238 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.650298 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.650312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.650330 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.650343 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.650434 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e
08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.664907 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.678040 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.692418 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.705439 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.722818 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:17Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.752538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.752603 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.752614 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.752632 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.752644 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.856207 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.856608 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.856618 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.856634 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.856656 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.947789 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:06:08.237623078 +0000 UTC Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.959018 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.959083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.959097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.959125 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:17 crc kubenswrapper[4797]: I0216 11:07:17.959141 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:17Z","lastTransitionTime":"2026-02-16T11:07:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.062147 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.062189 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.062202 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.062218 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.062230 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.167674 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.167719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.167733 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.167749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.167762 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.270115 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.270146 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.270155 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.270167 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.270177 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.302455 4797 generic.go:334] "Generic (PLEG): container finished" podID="377bb3bb-1c3d-4cc5-a159-2d116f464492" containerID="3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91" exitCode=0 Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.302626 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.303399 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerDied","Data":"3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.317755 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.331282 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.347034 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.363030 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.372368 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.372393 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.372401 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.372414 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.372422 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.379089 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.392526 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.413026 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e
08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.430992 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.443102 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.455706 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.469146 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.477698 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.477738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.477749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.477764 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.477776 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.480269 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.492405 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.506385 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:18Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.580625 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.580670 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.580680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.580693 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.580703 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.683220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.683272 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.683285 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.683303 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.683315 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.788254 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.788311 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.788330 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.788355 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.788373 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.891673 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.891736 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.891748 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.891772 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.891788 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:18 crc kubenswrapper[4797]: I0216 11:07:18.948276 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:44:21.531894691 +0000 UTC Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.981966 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.982017 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.982035 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:19 crc kubenswrapper[4797]: E0216 11:07:18.982335 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:19 crc kubenswrapper[4797]: E0216 11:07:18.982427 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.994398 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.994422 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.994434 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.994450 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:18.994465 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:18Z","lastTransitionTime":"2026-02-16T11:07:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:19 crc kubenswrapper[4797]: E0216 11:07:19.057121 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.097335 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.097369 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.097378 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.097391 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.097400 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.200449 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.200506 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.200518 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.200540 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.200553 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.303815 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.303863 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.303875 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.303891 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.303902 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.313927 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" event={"ID":"377bb3bb-1c3d-4cc5-a159-2d116f464492","Type":"ContainerStarted","Data":"32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67"} Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.314061 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.347513 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.363963 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.385011 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a6
83320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.402208 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.407074 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.407147 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.407168 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.407196 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.407210 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.417522 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.440988 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.458070 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.473385 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.495664 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.509783 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.510395 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.510429 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.510439 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.510458 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.510469 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.527945 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.540303 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.552265 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.569342 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:19Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.613505 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.613626 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.613644 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.613671 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.613688 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.749953 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.750059 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.750083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.750139 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.750160 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.855470 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.855520 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.855536 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.855562 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.855611 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.950550 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 17:50:30.078618848 +0000 UTC
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.959517 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.959567 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.959598 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.959621 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:19 crc kubenswrapper[4797]: I0216 11:07:19.959636 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:19Z","lastTransitionTime":"2026-02-16T11:07:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.063178 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.063242 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.063256 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.063275 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.063289 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.166719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.166771 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.166788 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.166811 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.166831 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.269701 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.269782 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.269807 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.269844 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.269866 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.372988 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.373069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.373094 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.373129 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.373155 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.476527 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.476636 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.476668 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.476706 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.476730 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.578551 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.578607 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.578622 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.578637 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.578649 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.681446 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.681543 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.681557 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.681623 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.681643 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.784107 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.784147 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.784157 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.784171 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.784182 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.886986 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.887054 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.887079 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.887111 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.887132 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.951801 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:04:51.59476183 +0000 UTC
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.982427 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.982457 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:07:20 crc kubenswrapper[4797]: E0216 11:07:20.982600 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:07:20 crc kubenswrapper[4797]: E0216 11:07:20.982716 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.982779 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:07:20 crc kubenswrapper[4797]: E0216 11:07:20.982849 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.989260 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.989305 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.989316 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.989335 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:20 crc kubenswrapper[4797]: I0216 11:07:20.989347 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:20Z","lastTransitionTime":"2026-02-16T11:07:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.092542 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.092603 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.092615 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.092632 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.092642 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.194886 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.194932 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.194943 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.194959 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.194970 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.297622 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.297685 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.297717 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.297748 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.297771 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.401503 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.401612 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.401645 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.401673 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.401699 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.504863 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.504903 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.504911 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.504925 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.504934 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.576086 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm"]
Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.576529 4797 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.579686 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.580179 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.591362 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.608085 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.608128 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.608143 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.608164 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.608185 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.615108 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.636258 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.650470 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.663608 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.682231 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.696044 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.707048 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.711033 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.711079 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.711090 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.711106 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.711117 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.721679 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.733346 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.733458 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxq6\" (UniqueName: \"kubernetes.io/projected/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-kube-api-access-2fxq6\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.733495 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.733625 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.739119 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.767887 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.827652 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.827688 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.827696 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.827710 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.827720 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.834907 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.834955 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fxq6\" (UniqueName: \"kubernetes.io/projected/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-kube-api-access-2fxq6\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.834990 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.835050 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.835606 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-env-overrides\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.836092 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.841475 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.843564 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.858138 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fxq6\" (UniqueName: \"kubernetes.io/projected/ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae-kube-api-access-2fxq6\") pod \"ovnkube-control-plane-749d76644c-vnjnm\" (UID: \"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.859074 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.883513 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.899615 4797 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.905246 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath
\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:21Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:21 crc kubenswrapper[4797]: W0216 11:07:21.911725 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac7fc57b_ad0c_4b7c_b65c_6f930a3d66ae.slice/crio-101f57f537ab10fcfcae2697d2da9e52c327fc788d091792f1050e3ac34ae5d4 WatchSource:0}: Error finding 
container 101f57f537ab10fcfcae2697d2da9e52c327fc788d091792f1050e3ac34ae5d4: Status 404 returned error can't find the container with id 101f57f537ab10fcfcae2697d2da9e52c327fc788d091792f1050e3ac34ae5d4 Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.930930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.930963 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.930974 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.930989 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.931000 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:21Z","lastTransitionTime":"2026-02-16T11:07:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:21 crc kubenswrapper[4797]: I0216 11:07:21.952163 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:00:50.544211426 +0000 UTC Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.034667 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.034697 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.034707 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.034720 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.034730 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.137985 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.138027 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.138038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.138056 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.138066 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.240843 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.240878 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.240887 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.240901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.240911 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.324653 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" event={"ID":"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae","Type":"ContainerStarted","Data":"271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.324734 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" event={"ID":"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae","Type":"ContainerStarted","Data":"101f57f537ab10fcfcae2697d2da9e52c327fc788d091792f1050e3ac34ae5d4"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.346625 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.346659 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.346667 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.346680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.346691 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.449091 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.449140 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.449153 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.449172 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.449186 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.551758 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.551789 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.551797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.551811 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.551820 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.654701 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.654760 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.654769 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.654787 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.654798 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.742761 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.742909 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.743053 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.743104 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:07:38.743047661 +0000 UTC m=+53.463232641 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.743219 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:38.743209245 +0000 UTC m=+53.463394225 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.757152 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.757194 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.757203 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.757217 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.757226 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.843838 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.843907 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.843937 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844077 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844091 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844148 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:22 crc 
kubenswrapper[4797]: E0216 11:07:22.844193 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844207 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844100 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844248 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844175 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:38.844157124 +0000 UTC m=+53.564342104 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844286 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:38.844266626 +0000 UTC m=+53.564451676 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.844305 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:38.844294967 +0000 UTC m=+53.564480057 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.859775 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.859811 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.859821 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.859839 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.859848 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.952371 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 16:02:29.636923174 +0000 UTC Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.962702 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.962747 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.962756 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.962771 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.962779 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:22Z","lastTransitionTime":"2026-02-16T11:07:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.982264 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.982477 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.982529 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.982685 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:22 crc kubenswrapper[4797]: I0216 11:07:22.982683 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:22 crc kubenswrapper[4797]: E0216 11:07:22.982810 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.042747 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-cglwk"] Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.043141 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.043205 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.057388 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.064891 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.064922 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.064931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.064945 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.064955 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.067640 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-
dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.079754 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.096197 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.108707 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.123478 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.134623 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.146153 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.147433 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.147474 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g9vs\" (UniqueName: \"kubernetes.io/projected/1f19a4ae-a737-4818-82b5-db20cafd45c7-kube-api-access-2g9vs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.157717 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.167709 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.167737 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.167746 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.167760 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.167768 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.178170 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e
08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.191404 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.201850 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.214006 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.226521 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.239133 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.248290 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.248366 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g9vs\" (UniqueName: \"kubernetes.io/projected/1f19a4ae-a737-4818-82b5-db20cafd45c7-kube-api-access-2g9vs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.248903 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.248981 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:23.748958863 +0000 UTC m=+38.469143883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.253947 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.268665 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g9vs\" (UniqueName: \"kubernetes.io/projected/1f19a4ae-a737-4818-82b5-db20cafd45c7-kube-api-access-2g9vs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.270311 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.270345 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.270355 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.270368 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.270377 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.332800 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/0.log" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.336766 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7" exitCode=1 Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.336850 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.337778 4797 scope.go:117] "RemoveContainer" containerID="8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.340255 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" event={"ID":"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae","Type":"ContainerStarted","Data":"6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.351239 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.367798 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.376373 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.376405 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.376414 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.376427 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.376436 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.380407 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.395449 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.414226 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.432702 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.445399 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.460196 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.474370 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.481418 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.481498 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.481517 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.481545 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.481563 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.489275 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.509669 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.526305 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d74
62\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.547202 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.583984 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.584029 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.584039 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.584054 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.584065 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.611819 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:22Z\\\",\\\"message\\\":\\\"r removal\\\\nI0216 11:07:22.655180 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:07:22.655185 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:07:22.655196 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:07:22.655216 6076 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 11:07:22.655220 6076 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 11:07:22.655247 6076 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 11:07:22.655260 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 11:07:22.655268 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:07:22.655277 6076 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 11:07:22.655279 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:07:22.655294 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:07:22.655302 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:07:22.655309 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:07:22.655606 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 11:07:22.655708 6076 factory.go:656] Stopping watch factory\\\\nI0216 11:07:22.655759 6076 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.625326 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.637721 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.650025 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.664531 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":
\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.667354 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.667412 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.667427 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.667447 4797 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.667463 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.686755 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.694317 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.697548 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.697591 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.697602 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.697618 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.697629 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.700718 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.709258 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.713402 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.713407 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.713432 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.713515 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.713535 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.713547 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.727074 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.728282 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.732114 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.732154 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.732165 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.732178 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.732188 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.743487 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.745766 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.749298 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.749335 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.749344 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.749373 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.749384 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.753391 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.753544 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.753657 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:24.753636338 +0000 UTC m=+39.473821368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.756295 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.766204 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: E0216 11:07:23.766322 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.766763 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.767986 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.768021 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.768037 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.768052 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.768062 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.790404 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e
08ce9c9b1799603e0c032db7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:22Z\\\",\\\"message\\\":\\\"r removal\\\\nI0216 11:07:22.655180 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:07:22.655185 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:07:22.655196 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:07:22.655216 6076 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 11:07:22.655220 6076 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 11:07:22.655247 6076 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 11:07:22.655260 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 11:07:22.655268 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:07:22.655277 6076 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 11:07:22.655279 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:07:22.655294 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:07:22.655302 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:07:22.655309 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:07:22.655606 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 11:07:22.655708 6076 factory.go:656] Stopping watch factory\\\\nI0216 11:07:22.655759 6076 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.803755 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.816243 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.831419 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.854273 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.870210 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.870252 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.870263 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.870278 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.870290 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.872476 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.884086 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:23Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.952646 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 00:26:52.061480456 +0000 UTC Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.972357 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.972394 4797 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.972404 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.972417 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:23 crc kubenswrapper[4797]: I0216 11:07:23.972428 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:23Z","lastTransitionTime":"2026-02-16T11:07:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.074456 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.074494 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.074509 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.074525 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.074536 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.177022 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.177058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.177069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.177085 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.177095 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.279339 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.279377 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.279387 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.279399 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.279408 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.345027 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/1.log" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.345610 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/0.log" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.348298 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862" exitCode=1 Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.348410 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.348490 4797 scope.go:117] "RemoveContainer" containerID="8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.349347 4797 scope.go:117] "RemoveContainer" containerID="99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862" Feb 16 11:07:24 crc kubenswrapper[4797]: E0216 11:07:24.349548 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.372880 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.384794 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.384821 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.384829 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.384843 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.384853 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.405927 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.430471 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d0d588fe55d31ad7c9e3b927e95389bc7de9c1e08ce9c9b1799603e0c032db7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:22Z\\\",\\\"message\\\":\\\"r removal\\\\nI0216 11:07:22.655180 6076 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:07:22.655185 6076 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:07:22.655196 6076 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:07:22.655216 6076 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 11:07:22.655220 6076 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 11:07:22.655247 6076 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 11:07:22.655260 6076 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 11:07:22.655268 6076 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:07:22.655277 6076 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 11:07:22.655279 6076 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:07:22.655294 6076 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:07:22.655302 6076 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:07:22.655309 6076 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:07:22.655606 6076 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 11:07:22.655708 6076 factory.go:656] Stopping watch factory\\\\nI0216 11:07:22.655759 6076 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.447161 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.457168 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.467102 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.479786 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.487242 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.487282 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.487290 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.487302 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.487338 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.492192 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.502858 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.513660 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 
11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.528670 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.541477 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.557946 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.570425 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.585822 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.589737 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.589789 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.589799 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.589819 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.589831 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.599101 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:24Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.691858 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.691939 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.691963 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.691992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.692017 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.769047 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:24 crc kubenswrapper[4797]: E0216 11:07:24.769189 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:24 crc kubenswrapper[4797]: E0216 11:07:24.769270 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:26.769251233 +0000 UTC m=+41.489436213 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.794834 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.794873 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.794882 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.794896 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.794906 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.898179 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.898214 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.898221 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.898238 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.898248 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:24Z","lastTransitionTime":"2026-02-16T11:07:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.953143 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 20:14:06.802595721 +0000 UTC Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.981724 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.981790 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.981754 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:24 crc kubenswrapper[4797]: I0216 11:07:24.981733 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:24 crc kubenswrapper[4797]: E0216 11:07:24.981918 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:24 crc kubenswrapper[4797]: E0216 11:07:24.982030 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:24 crc kubenswrapper[4797]: E0216 11:07:24.982127 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:24 crc kubenswrapper[4797]: E0216 11:07:24.982179 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.000849 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.000889 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.000901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.000918 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.000931 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.103452 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.103487 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.103495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.103508 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.103518 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.205723 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.205779 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.205795 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.205816 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.205831 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.282930 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.307819 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.307856 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.307869 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.307883 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.307895 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.354077 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/1.log" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.361379 4797 scope.go:117] "RemoveContainer" containerID="99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862" Feb 16 11:07:25 crc kubenswrapper[4797]: E0216 11:07:25.361726 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.384779 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 
11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.409739 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.411144 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.411223 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.411246 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.411273 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.411295 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.427101 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.442982 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.457087 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.474678 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.493805 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.511160 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.514219 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.514255 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.514267 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.514288 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.514300 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.532521 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151
e0a33ebbe921c3ded4080862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.546783 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.560030 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.580919 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.593521 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.605629 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.617093 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.617121 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.617129 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.617142 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.617150 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.618075 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.633099 4797 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee
00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.720970 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.721027 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.721046 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.721068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.721085 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.823519 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.823555 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.823563 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.823597 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.823615 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.926712 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.926750 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.926758 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.926771 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.926780 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:25Z","lastTransitionTime":"2026-02-16T11:07:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:25 crc kubenswrapper[4797]: I0216 11:07:25.953971 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:22:27.638029788 +0000 UTC Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.004298 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"p
odIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.019659 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecf
af33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.029013 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.029062 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.029075 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.029092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.029124 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.037146 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.055183 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.068619 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.082681 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.097259 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.116938 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.129121 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.130806 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.130954 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.131015 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.131104 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.131162 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.148939 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151
e0a33ebbe921c3ded4080862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.157890 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.167368 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.179497 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.189723 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.201562 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
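Every one of these patch attempts fails for the same underlying reason: the network-node-identity webhook at 127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is earlier than the node clock (2026-02-16T11:07:26Z). A minimal Go sketch for confirming the validity window from the node itself; InsecureSkipVerify is deliberate here, used only so the handshake survives long enough to read the peer certificate rather than failing exactly as the kubelet's client does.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

// Sketch: dial the webhook endpoint from the log and compare the serving
// certificate's validity window against the local clock.
func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)

	// These two branches are the same checks behind the log's
	// "certificate has expired or is not yet valid" message.
	if now.Before(cert.NotBefore) {
		fmt.Println("certificate is not yet valid")
	}
	if now.After(cert.NotAfter) {
		fmt.Println("certificate has expired")
	}
}
```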
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.211891 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.233681 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.233733 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.233743 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.233766 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.233777 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.335537 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.335615 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.335627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.335644 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.335654 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.438038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.438070 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.438080 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.438092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.438102 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.544526 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.544607 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.544619 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.544636 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.544648 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.647437 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.647480 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.647490 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.647504 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.647514 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.751539 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.751595 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.751604 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.751617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.751627 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.795815 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:07:26 crc kubenswrapper[4797]: E0216 11:07:26.796043 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 11:07:26 crc kubenswrapper[4797]: E0216 11:07:26.796170 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:30.796141131 +0000 UTC m=+45.516326151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered
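The nestedpendingoperations entry above schedules the next mount attempt 4s out ("durationBeforeRetry 4s"). The kubelet backs off failed volume operations exponentially, doubling the delay after each failure; a dependency-free sketch of that pattern (the 500ms initial value and 2m2s cap are assumptions for illustration, not the kubelet's exact constants):

// Sketch: doubling retry delay of the kind behind "durationBeforeRetry 4s".
package main

import (
	"fmt"
	"time"
)

type backoff struct {
	initial, max, current time.Duration
}

// next returns the delay before the following retry, doubling each call
// until it reaches the cap.
func (b *backoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
	} else if b.current < b.max {
		b.current *= 2
		if b.current > b.max {
			b.current = b.max
		}
	}
	return b.current
}

func main() {
	b := &backoff{initial: 500 * time.Millisecond, max: 2*time.Minute + 2*time.Second}
	for i := 0; i < 5; i++ {
		fmt.Printf("retry %d after %v\n", i+1, b.next()) // 500ms, 1s, 2s, 4s, 8s
	}
}

Under these assumed constants, the 4s delay in the log would correspond to the fourth consecutive failure of the same mount operation.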
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.854014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.854074 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.854092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.854116 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.854133 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.954699 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 20:53:27.08404844 +0000 UTC
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.957138 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.957182 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.957201 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.957223 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.957241 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:26Z","lastTransitionTime":"2026-02-16T11:07:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.982380 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.982426 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.982459 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:07:26 crc kubenswrapper[4797]: E0216 11:07:26.982683 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:07:26 crc kubenswrapper[4797]: I0216 11:07:26.982773 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:07:26 crc kubenswrapper[4797]: E0216 11:07:26.982962 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:07:26 crc kubenswrapper[4797]: E0216 11:07:26.983103 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:07:26 crc kubenswrapper[4797]: E0216 11:07:26.983283 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.060389 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.060489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.060508 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.060533 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.060555 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
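Every NotReady heartbeat and skipped pod sync in this stretch cites the same root cause: no CNI configuration file under /etc/kubernetes/cni/net.d/. A minimal sketch of the kind of directory scan a container runtime performs before reporting NetworkReady=true (illustrative only, not CRI-O's actual code):

// Sketch: the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/". A network provider (here, presumably
// ovn-kubernetes via multus) is expected to drop a config file into this
// directory once it starts.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni scans for
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("NetworkReady=false: no CNI configuration file found; has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}

Until that file appears, the kubelet keeps the node Ready=False and refuses to create sandboxes for any pod that is not host-networked, which is exactly the loop recorded below.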
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.163637 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.163682 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.163693 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.163709 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.163722 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.266416 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.266450 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.266460 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.266474 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.266507 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.370916 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.370975 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.370989 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.371012 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.371027 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.474840 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.474880 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.474890 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.474904 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.474914 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.577235 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.577267 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.577279 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.577292 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.577301 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.679931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.679971 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.679979 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.679991 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.680000 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.783028 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.783061 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.783069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.783082 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.783098 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.885520 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.885619 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.885632 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.885649 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.885660 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
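The setters.go:603 entries repeat roughly every 100ms because the kubelet re-evaluates node conditions on each status sync while the node stays NotReady. A sketch of how such a Ready=False condition is assembled; the real kubelet populates a v1.NodeCondition from k8s.io/api/core/v1, and a local struct stands in here to keep the sketch dependency-free:

// Sketch: building the Ready=False condition logged above.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	now := time.Now().UTC().Format(time.RFC3339)
	cond := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  now,
		LastTransitionTime: now, // only bumped when Status actually flips
		Reason:             "KubeletNotReady",
		Message:            "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Printf("Node became not ready: condition=%s\n", string(out))
}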
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.955308 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:31:56.126366981 +0000 UTC
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.987506 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.987696 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.987718 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.987735 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:27 crc kubenswrapper[4797]: I0216 11:07:27.987750 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:27Z","lastTransitionTime":"2026-02-16T11:07:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.090391 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.090438 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.090449 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.090465 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.090476 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.193458 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.193538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.193557 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.193598 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.193615 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.297187 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.297243 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.297255 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.297271 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.297283 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.400221 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.400247 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.400255 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.400267 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.400276 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.503251 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.503313 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.503324 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.503340 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.503350 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.605756 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.605800 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.605810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.605823 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.605832 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.708873 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.708925 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.708934 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.708949 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.708959 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.811569 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.811644 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.811654 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.811696 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.811708 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.914018 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.914081 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.914098 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.914120 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.914137 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:28Z","lastTransitionTime":"2026-02-16T11:07:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.955616 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:34:56.893946745 +0000 UTC
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.982045 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.982074 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.982156 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:07:28 crc kubenswrapper[4797]: I0216 11:07:28.982214 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
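Note that the certificate_manager.go:356 lines report a fixed expiration (2026-02-24 05:53:03) but a rotation deadline that jumps between heartbeats (2025-11-10, 2025-12-12, 2025-11-26, ...). client-go's certificate manager picks the deadline at a jittered fraction of the certificate's validity window and re-rolls the jitter each time it is computed; the 70-90% range below mirrors that behaviour but, along with the assumed one-year lifetime, is an assumption for illustration:

// Sketch: why the rotation deadline moves while the expiration stays fixed.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline returns a point 70-90% of the way through the
// certificate's validity window, re-randomized on every call.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)          // assumed one-year lifetime
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline is", nextRotationDeadline(notBefore, notAfter))
	}
}

Since all of the logged deadlines are already in the past relative to the node's clock, the kubelet will attempt rotation immediately on each pass.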
Feb 16 11:07:28 crc kubenswrapper[4797]: E0216 11:07:28.982214 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:07:28 crc kubenswrapper[4797]: E0216 11:07:28.982352 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:07:28 crc kubenswrapper[4797]: E0216 11:07:28.982456 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:07:28 crc kubenswrapper[4797]: E0216 11:07:28.982511 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.016776 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.016837 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.016859 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.016887 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.016909 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.119449 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.119488 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.119501 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.119519 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.119531 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.221943 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.221985 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.221995 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.222008 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.222017 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.323998 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.324057 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.324077 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.324104 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.324126 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.425860 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.426100 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.426206 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.426310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.426422 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.528849 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.528901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.528918 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.528943 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.528965 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.631486 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.631522 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.631530 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.631543 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.631553 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.734068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.734131 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.734147 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.734169 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.734187 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.836332 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.836379 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.836394 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.836411 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.836422 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.939181 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.939238 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.939254 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.939278 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.939297 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:29Z","lastTransitionTime":"2026-02-16T11:07:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.956239 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 14:04:23.741462566 +0000 UTC
Feb 16 11:07:29 crc kubenswrapper[4797]: I0216 11:07:29.983886 4797 scope.go:117] "RemoveContainer" containerID="cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.042140 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.042191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.042206 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.042226 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.042241 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.144386 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.144436 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.144447 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.144466 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.144479 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.246774 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.246822 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.246834 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.246849 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.246863 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.349486 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.349522 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.349532 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.349549 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.349561 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.381316 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.383476 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.384431 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.403411 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\
\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.420085 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388
416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\
"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishe
dAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.433525 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.452124 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.452568 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.452668 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.452726 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.452786 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.452855 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.467147 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.481245 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.493535 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.505756 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.525331 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.539149 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.550907 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.560184 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.560238 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.560249 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.560267 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.560281 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.566145 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.576180 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.587064 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.600838 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.613119 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:30Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.662922 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.662999 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.663023 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.663051 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.663077 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.765848 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.766213 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.766303 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.766466 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.766611 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.838912 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:30 crc kubenswrapper[4797]: E0216 11:07:30.839128 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:30 crc kubenswrapper[4797]: E0216 11:07:30.839323 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:38.839304044 +0000 UTC m=+53.559489034 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.868868 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.868912 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.868922 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.868937 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.868950 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.956648 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:34:07.864431283 +0000 UTC Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.971672 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.971703 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.971713 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.971731 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.971742 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:30Z","lastTransitionTime":"2026-02-16T11:07:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.982079 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.982106 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:30 crc kubenswrapper[4797]: E0216 11:07:30.982176 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.982275 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:30 crc kubenswrapper[4797]: I0216 11:07:30.982336 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:30 crc kubenswrapper[4797]: E0216 11:07:30.982444 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:30 crc kubenswrapper[4797]: E0216 11:07:30.982625 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:30 crc kubenswrapper[4797]: E0216 11:07:30.982820 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.074380 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.074437 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.074453 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.074477 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.074493 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.177501 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.177559 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.177574 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.177628 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.177645 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.280139 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.280177 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.280186 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.280200 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.280211 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.382496 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.382555 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.382572 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.382620 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.382640 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.485700 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.485743 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.485754 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.485769 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.485781 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.588635 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.588671 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.588683 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.588699 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.588711 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.691570 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.691633 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.691646 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.691659 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.691669 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.794308 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.794364 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.794380 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.794399 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.794416 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.897199 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.897436 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.897505 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.897596 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.897656 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.957103 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:53:54.668079734 +0000 UTC Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.999679 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.999716 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.999725 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.999741 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:31 crc kubenswrapper[4797]: I0216 11:07:31.999750 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:31Z","lastTransitionTime":"2026-02-16T11:07:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.102464 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.102503 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.102511 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.102524 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.102534 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.204997 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.205040 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.205048 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.205062 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.205071 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.307399 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.307483 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.307511 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.307539 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.307560 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.410470 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.410523 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.410539 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.410559 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.410574 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.513907 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.513986 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.514012 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.514041 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.514068 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.617295 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.617358 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.617381 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.617410 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.617431 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.719748 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.719795 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.719809 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.719826 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.719839 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.823035 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.823067 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.823076 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.823088 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.823096 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.925273 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.925392 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.925413 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.925438 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.925451 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:32Z","lastTransitionTime":"2026-02-16T11:07:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.957662 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 06:53:11.88920455 +0000 UTC Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.981778 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.981835 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.981809 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:32 crc kubenswrapper[4797]: E0216 11:07:32.981966 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:32 crc kubenswrapper[4797]: I0216 11:07:32.981780 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:32 crc kubenswrapper[4797]: E0216 11:07:32.982093 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:32 crc kubenswrapper[4797]: E0216 11:07:32.982258 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:32 crc kubenswrapper[4797]: E0216 11:07:32.982324 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.028312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.028376 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.028388 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.028406 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.028419 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.131310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.131381 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.131394 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.131441 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.131455 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.235075 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.235144 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.235162 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.235188 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.235207 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.338068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.338133 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.338144 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.338160 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.338171 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.440687 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.440735 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.440746 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.440764 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.440777 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.543943 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.543977 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.543987 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.544001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.544009 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.647397 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.647462 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.647487 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.647516 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.647538 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.750944 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.751002 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.751019 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.751055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.751075 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.828934 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.829188 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.829432 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.829691 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.829909 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: E0216 11:07:33.848973 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:33Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.853930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.853985 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.854001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.854024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.854040 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: E0216 11:07:33.876297 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:33Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.880442 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.880489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.880508 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.880530 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.880547 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: E0216 11:07:33.899200 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:33Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.903794 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.903853 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.903873 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.903895 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.903914 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: E0216 11:07:33.922019 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:33Z is after 
2025-08-24T17:21:41Z" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.926841 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.927038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.927148 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.927259 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.927353 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:33 crc kubenswrapper[4797]: E0216 11:07:33.945357 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:33Z is after 
2025-08-24T17:21:41Z"
Feb 16 11:07:33 crc kubenswrapper[4797]: E0216 11:07:33.945525 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.947657 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.947695 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.947705 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.947719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.947730 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:33Z","lastTransitionTime":"2026-02-16T11:07:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:33 crc kubenswrapper[4797]: I0216 11:07:33.958449 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 10:58:46.404335989 +0000 UTC
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.051204 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.051450 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.051513 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.051599 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.051660 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.154250 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.154474 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.154633 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.154771 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.154914 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.256740 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.256780 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.256790 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.256803 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.256812 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.359156 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.359365 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.359463 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.359536 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.359626 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.462931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.462984 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.463002 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.463083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.463102 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.566183 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.566450 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.566531 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.566618 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.566679 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.669456 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.669502 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.669521 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.669546 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.669558 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.777291 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.777366 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.777571 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.777661 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.777682 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.881656 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.881971 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.882148 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.882359 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.882530 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.958734 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:54:29.067732387 +0000 UTC
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.981688 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.981819 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.981713 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.981811 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:07:34 crc kubenswrapper[4797]: E0216 11:07:34.982118 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:07:34 crc kubenswrapper[4797]: E0216 11:07:34.982232 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:07:34 crc kubenswrapper[4797]: E0216 11:07:34.982374 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:07:34 crc kubenswrapper[4797]: E0216 11:07:34.982414 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.984766 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.984877 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.984902 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.984928 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:34 crc kubenswrapper[4797]: I0216 11:07:34.984951 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:34Z","lastTransitionTime":"2026-02-16T11:07:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.086967 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.087028 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.087039 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.087055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.087066 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.188826 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.189240 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.189406 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.190002 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.190185 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.292933 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.293264 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.293536 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.293775 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.293940 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.396137 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.396172 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.396189 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.396208 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.396223 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.498985 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.499028 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.499041 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.499055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.499067 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.601484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.601779 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.601871 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.601956 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.602025 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.704748 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.704781 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.704794 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.704810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.704822 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.807303 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.807553 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.807643 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.807716 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.807772 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.910096 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.910357 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.910441 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.910533 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.910627 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:35Z","lastTransitionTime":"2026-02-16T11:07:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:35 crc kubenswrapper[4797]: I0216 11:07:35.959173 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 16:21:14.862907593 +0000 UTC Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.000764 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:35Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.013965 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.014191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.014268 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.014347 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.014418 4797 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.017992 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.035481 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.052039 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.070688 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.087353 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.100147 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.115802 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.116431 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.116464 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.116475 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.116490 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.116499 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.143978 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151
e0a33ebbe921c3ded4080862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.155949 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.166293 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.179467 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.194377 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.208966 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.218620 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.219138 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.219189 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.219202 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.219220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.219233 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.228760 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\
\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.322746 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.322826 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.322851 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.322878 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.322905 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.426392 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.426445 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.426456 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.426475 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.426487 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.529608 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.529641 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.529652 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.529669 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.529681 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.631974 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.632036 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.632056 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.632077 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.632091 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.734711 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.734759 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.734772 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.734791 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.734805 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.836765 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.836805 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.836816 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.836832 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.836844 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.939308 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.939351 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.939360 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.939377 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.939388 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:36Z","lastTransitionTime":"2026-02-16T11:07:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.959571 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:59:36.000807509 +0000 UTC Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.982052 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.982048 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:36 crc kubenswrapper[4797]: E0216 11:07:36.982200 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.982078 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:36 crc kubenswrapper[4797]: I0216 11:07:36.982068 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:36 crc kubenswrapper[4797]: E0216 11:07:36.982314 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:36 crc kubenswrapper[4797]: E0216 11:07:36.982376 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:36 crc kubenswrapper[4797]: E0216 11:07:36.982483 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.042106 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.042143 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.042163 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.042178 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.042205 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.145206 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.145271 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.145282 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.145301 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.145311 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.248324 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.248374 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.248385 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.248403 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.248415 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.298408 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.306554 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.310810 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.321099 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.334462 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.349448 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.351094 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.351166 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.351193 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.351234 4797 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.351261 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.367833 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.388927 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.406644 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 
11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.428745 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.446437 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.453821 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.453922 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.453961 4797 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.454043 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.454063 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.468394 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.490770 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.510023 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a6
83320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.532118 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.551200 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.556397 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.556451 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.556467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.556489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.556507 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.573642 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151
e0a33ebbe921c3ded4080862\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.588811 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:37Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.659559 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.659644 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.659660 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.659682 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.659699 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.761925 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.762008 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.762031 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.762057 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.762074 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.864929 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.864980 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.865011 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.865030 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.865043 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.960085 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:00:32.242369207 +0000 UTC Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.971911 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.971953 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.971965 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.971983 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:37 crc kubenswrapper[4797]: I0216 11:07:37.971997 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:37Z","lastTransitionTime":"2026-02-16T11:07:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.073771 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.073835 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.073852 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.073876 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.073892 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.176636 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.176960 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.177100 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.177268 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.177406 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.280645 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.280711 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.280728 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.280752 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.280769 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.383863 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.383945 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.383959 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.383978 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.383990 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.485655 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.485892 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.486290 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.486435 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.486515 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.589094 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.589707 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.589932 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.590127 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.590300 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.694044 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.694084 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.694093 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.694106 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.694114 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.797035 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.797115 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.797136 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.797160 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.797179 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.817452 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.817659 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.817698 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 11:08:10.817673014 +0000 UTC m=+85.537857994 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.817769 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.817824 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:08:10.817811778 +0000 UTC m=+85.537996748 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.900051 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.900100 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.900116 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.900140 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.900156 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:38Z","lastTransitionTime":"2026-02-16T11:07:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.918758 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.918802 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.918838 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.918865 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919005 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919024 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919018 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919049 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919037 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919080 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919163 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-16 11:08:10.919143758 +0000 UTC m=+85.639328738 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919076 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919187 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:08:10.919176438 +0000 UTC m=+85.639361428 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919198 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919210 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:07:54.919200349 +0000 UTC m=+69.639385439 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.919234 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:08:10.91922609 +0000 UTC m=+85.639411190 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.960658 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 09:48:29.660346809 +0000 UTC Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.982139 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.982205 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.982158 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:38 crc kubenswrapper[4797]: I0216 11:07:38.982331 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.982545 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.983035 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.983119 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:38 crc kubenswrapper[4797]: E0216 11:07:38.983181 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
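Every "network is not ready" line in this stretch reduces to a single condition: the container runtime reports NetworkReady=false because nothing has yet written a CNI network config into /etc/kubernetes/cni/net.d/. Once ovn-kubernetes (whose ovnkube-node-h9hsp pod is still coming up further down) drops its config there, the runtime's network check passes and the node can leave KubeletNotReady. A hypothetical stdlib-only Go helper for watching that condition from the host, using the extension set libcni accepts (*.conf, *.conflist, *.json):

// cni_ready.go — hypothetical debugging helper, not OpenShift code: poll a
// CNI conf dir until a network config appears, mirroring the condition the
// runtime keeps reporting above ("no CNI configuration file in
// /etc/kubernetes/cni/net.d/").
package main

import (
	"fmt"
	"path/filepath"
	"time"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the errors above
	for {
		var found []string
		// libcni loads network configs with these extensions.
		for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
			matches, _ := filepath.Glob(filepath.Join(dir, pat)) // err only on a bad pattern
			found = append(found, matches...)
		}
		if len(found) > 0 {
			fmt.Println("CNI config present, network can go Ready:", found)
			return
		}
		fmt.Println("no CNI configuration file yet; node stays NotReady")
		time.Sleep(2 * time.Second)
	}
}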
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.003846 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.003892 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.003903 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.003921 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.003937 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.107138 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.107171 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.107179 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.107191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.107201 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.209610 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.209649 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.209661 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.209678 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.209688 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.312036 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.312372 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.312535 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.312725 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.312867 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.415744 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.416013 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.416197 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.416377 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.416547 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.519712 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.520134 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.520287 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.520433 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.520554 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.623868 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.623910 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.623922 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.623937 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.623949 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.725850 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.725887 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.725897 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.725913 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.725923 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.829062 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.829119 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.829133 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.829154 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.829169 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.931903 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.931977 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.931996 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.932022 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.932039 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:39Z","lastTransitionTime":"2026-02-16T11:07:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.961224 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 12:22:52.62958975 +0000 UTC Feb 16 11:07:39 crc kubenswrapper[4797]: I0216 11:07:39.983800 4797 scope.go:117] "RemoveContainer" containerID="99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.035037 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.035074 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.035084 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.035100 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.035109 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.137495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.137524 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.137535 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.137551 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.137565 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.239647 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.239707 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.239726 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.239754 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.239772 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.342774 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.342835 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.342851 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.342876 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.342894 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.416977 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/1.log" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.444756 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.444789 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.444797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.444810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.444818 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.454233 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.454697 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.476184 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.492495 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.506779 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.528450 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.547031 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.547075 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.547087 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.547103 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.547113 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.548341 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.570202 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.582842 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.594450 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.610837 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.623151 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.634496 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.645668 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.648701 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.648728 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.648736 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.648749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.648759 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.656761 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.666901 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.682669 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.696517 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.708913 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:40Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.751024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.751054 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.751066 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.751081 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.751093 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.852930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.852969 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.852978 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.852992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.853002 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.956484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.956756 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.956938 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.957087 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.957276 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:40Z","lastTransitionTime":"2026-02-16T11:07:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.961711 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 09:50:59.648192961 +0000 UTC Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.982261 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.982312 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.982351 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:40 crc kubenswrapper[4797]: I0216 11:07:40.982262 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:40 crc kubenswrapper[4797]: E0216 11:07:40.982456 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:40 crc kubenswrapper[4797]: E0216 11:07:40.982683 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:40 crc kubenswrapper[4797]: E0216 11:07:40.982801 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:40 crc kubenswrapper[4797]: E0216 11:07:40.982903 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.060357 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.060433 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.060452 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.060476 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.060494 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.163073 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.163129 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.163144 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.163166 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.163186 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.266602 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.266635 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.266647 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.266664 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.266677 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.369346 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.369389 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.369400 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.369414 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.369426 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.369426 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.460402 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/2.log"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.461134 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/1.log"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.463936 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c" exitCode=1
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.463989 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c"}
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.464035 4797 scope.go:117] "RemoveContainer" containerID="99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.465005 4797 scope.go:117] "RemoveContainer" containerID="f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c"
Feb 16 11:07:41 crc kubenswrapper[4797]: E0216 11:07:41.465252 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.472053 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.472103 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.472117 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.472137 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.472153 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.482311 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.495970 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.509287 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.530969 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.545831 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.560912 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.572991 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.574281 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.574351 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.574377 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.574406 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.574427 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.584366 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.603870 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99c2533e5609a12f662784734fd7861beb843151e0a33ebbe921c3ded4080862\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:24Z\\\",\\\"message\\\":\\\"il,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI0216 11:07:24.271850 6310 lb_config.go:1031] Cluster endpoints for openshift-network-diagnostics/network-check-target for network=default are: map[]\\\\nI0216 11:07:24.271826 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0216 11:07:24.271853 6310 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:73135118-cf1b-4568-bd31-2f50308bf69d}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0216 11:07:24.271852 6310 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
h\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: 
x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.614900 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.633718 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.650951 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.665903 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.678543 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.678628 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.678643 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.678658 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.678671 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.680948 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.693697 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.710232 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.726771 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:41Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.781217 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.781265 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.781277 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.781296 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.781308 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.883736 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.883788 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.883804 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.883826 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.883844 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.962709 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:50:38.384346128 +0000 UTC Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.986552 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.986634 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.986651 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.986674 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:41 crc kubenswrapper[4797]: I0216 11:07:41.986691 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:41Z","lastTransitionTime":"2026-02-16T11:07:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.089074 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.089564 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.089738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.089853 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.090023 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.193787 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.193837 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.193860 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.193983 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.194002 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.296461 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.296520 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.296534 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.296549 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.296560 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.398703 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.398761 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.398779 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.398802 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.398819 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.468644 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/2.log" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.472477 4797 scope.go:117] "RemoveContainer" containerID="f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c" Feb 16 11:07:42 crc kubenswrapper[4797]: E0216 11:07:42.472770 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.493947 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c03
39a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.500634 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.500677 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.500691 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.500705 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.500715 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.510887 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.523951 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.540378 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.553345 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.565125 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.576654 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.589366 4797 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.600850 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.602620 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.602648 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.602657 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.602707 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.602718 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.618461 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.629978 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.642157 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.656045 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.669988 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.689227 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.704542 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.704775 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.704842 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.704901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.704955 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.705396 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 
builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.717839 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:42Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.807900 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.807946 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.807957 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.807972 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.807984 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.919680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.919724 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.919735 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.919749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.919758 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:42Z","lastTransitionTime":"2026-02-16T11:07:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.963813 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:47:11.285604963 +0000 UTC Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.982298 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.982350 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.982402 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:42 crc kubenswrapper[4797]: E0216 11:07:42.982428 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:42 crc kubenswrapper[4797]: I0216 11:07:42.982457 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:42 crc kubenswrapper[4797]: E0216 11:07:42.982561 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:42 crc kubenswrapper[4797]: E0216 11:07:42.982767 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:42 crc kubenswrapper[4797]: E0216 11:07:42.982853 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.022755 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.022797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.022807 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.022822 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.022834 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.125480 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.126064 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.126149 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.126228 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.126304 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.229977 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.230428 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.230677 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.230856 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.231002 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.334103 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.334162 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.334178 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.334200 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.334217 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.436367 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.436414 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.436425 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.436443 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.436457 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.541484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.541790 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.541865 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.541942 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.542031 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.644894 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.644952 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.644964 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.644978 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.644990 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.747492 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.747556 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.747568 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.747626 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.747640 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.854461 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.854503 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.854514 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.854538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.854551 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.958627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.958664 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.958675 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.958692 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.958702 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.964667 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:27:17.009772888 +0000 UTC Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.986563 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.986631 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.986647 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.986666 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:43 crc kubenswrapper[4797]: I0216 11:07:43.986680 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:43Z","lastTransitionTime":"2026-02-16T11:07:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.001465 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:43Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.005923 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.005958 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.005968 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.005981 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.005990 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.019469 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.023613 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.023801 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.023906 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.023996 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.024113 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.035728 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.040286 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.040322 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.040331 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.040347 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.040356 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.051770 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.055185 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.055225 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.055233 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.055246 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.055255 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.068647 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.068815 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.070588 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.070690 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.070781 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.070868 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.070944 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.173721 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.173775 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.173786 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.173803 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.173816 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.275625 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.275669 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.275680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.275697 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.275711 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.389996 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.390541 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.390568 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.390627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.390648 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.501247 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.501316 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.501333 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.501356 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.501372 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.603542 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.603604 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.603617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.603631 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.603643 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.706547 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.706624 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.706638 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.706654 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.706666 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.719520 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.738806 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.756665 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.777196 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.794265 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a6
83320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.807323 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.808770 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.808803 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.808814 4797 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.808830 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.808841 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.820069 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.838024 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.849730 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.862953 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.877015 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.892695 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.902709 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.911783 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.911806 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.911815 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.911828 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.911837 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:44Z","lastTransitionTime":"2026-02-16T11:07:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.916123 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.927427 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.941138 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.952253 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.962431 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:44Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.965618 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 10:44:52.921864228 +0000 UTC
Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.982381 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.982445 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.982505 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.982641 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.982685 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:07:44 crc kubenswrapper[4797]: I0216 11:07:44.982754 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.982891 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:07:44 crc kubenswrapper[4797]: E0216 11:07:44.982992 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.014518 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.014565 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.014594 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.014614 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.014627 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.116933 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.116964 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.116974 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.116988 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.117000 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
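Every "Failed to update status for pod" entry above shares one root cause: the serving certificate behind https://127.0.0.1:9743 (the pod.network-node-identity.openshift.io webhook) expired on 2025-08-24, while the node clock reads 2026-02-16. A minimal Go sketch of the same validity check follows; the address comes from the log, the program itself is illustrative and not part of the kubelet:

```go
// probe_webhook_cert.go - diagnostic sketch: dial the webhook endpoint seen
// in the log and print the serving certificate's validity window, mirroring
// the crypto/x509 check that is failing above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // from the failing Post "https://127.0.0.1:9743/pod?timeout=10s"

	// InsecureSkipVerify lets us retrieve the certificate even though
	// verification would fail; we only want to inspect it, not trust it.
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now()
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore)
	fmt.Println("notAfter: ", cert.NotAfter)

	// The same comparison crypto/x509 performs; when it fails, Go reports
	// "certificate has expired or is not yet valid", exactly as logged.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Println("certificate is NOT valid at", now)
	}
}
```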
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.219901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.219938 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.219946 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.219960 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.219969 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.322713 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.322761 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.322772 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.322788 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.322798 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.425941 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.426006 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.426028 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.426056 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.426081 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.528731 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.529054 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.529142 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.529272 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.529383 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.632680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.633016 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.633219 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.633419 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.633570 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.736664 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.737115 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.737316 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.737612 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.737745 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
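The NodeNotReady condition keeps repeating because the container runtime finds no CNI network configuration in /etc/kubernetes/cni/net.d/. A sketch of that directory check follows (not OpenShift code); the path is taken from the log, and the file extensions assume the usual libcni convention of .conf, .conflist, and .json:

```go
// check_cni_conf.go - sketch of the check the runtime is effectively failing:
// look for a CNI network config file in the directory named in the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // from "no CNI configuration file in /etc/kubernetes/cni/net.d/"

	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}

	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found network config:", e.Name())
			found = true
		}
	}
	if !found {
		// This is the situation the kubelet is reporting: the node stays
		// NotReady until the network plugin (here, OVN-Kubernetes behind
		// multus) writes its configuration into this directory.
		fmt.Println("no CNI configuration file in", dir)
	}
}
```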
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.840950 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.840995 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.841009 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.841028 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.841042 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.943106 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.943945 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.944059 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.944170 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.944277 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:45Z","lastTransitionTime":"2026-02-16T11:07:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
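The certificate_manager lines (11:07:44.965618 above, 11:07:45.966767 below) pair the kubelet-serving certificate's expiry with a jittered rotation deadline, and both logged deadlines (2025-11-09 and 2025-12-02) are already in the past relative to the node clock, so rotation is overdue. client-go schedules rotation at a random point late in the certificate's lifetime; the sketch below assumes a 70-90% window and a one-year certificate, both assumptions rather than quotes of the implementation, though they are consistent with the deadlines logged here:

```go
// rotation_deadline.go - sketch of how a jittered rotation deadline like the
// ones in the certificate_manager lines can be derived.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a uniformly random instant between 70% and 90% of
// the certificate's validity period, measured from notBefore (assumed window).
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry matches the log: 2026-02-24 05:53:03 UTC. notBefore is an
	// assumption (a one-year certificate) for illustration only.
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)

	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline)

	// A deadline in the past (e.g. 2025-11-09 while the clock reads
	// 2026-02-16) means rotation should be attempted immediately.
	if time.Now().After(deadline) {
		fmt.Println("deadline has passed; rotate now")
	}
}
```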
Feb 16 11:07:45 crc kubenswrapper[4797]: I0216 11:07:45.966767 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 23:16:26.106926151 +0000 UTC
Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.002624 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.018454 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.031842 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.044302 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.047024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.047096 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.047118 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.047145 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.047164 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.058899 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.077260 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.095979 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\
\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.110665 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 
11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.129547 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.148830 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/o
s-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.149794 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.149957 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.150058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.150161 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.150254 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.169674 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.188513 4797 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.206680 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.226540 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.237611 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.251621 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.252776 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.252824 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.252861 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.252878 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.252890 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.280899 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c03
39a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:46Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.355797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.355839 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.355850 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.355865 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.355878 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.458098 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.458142 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.458154 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.458170 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.458183 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.561148 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.561270 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.561289 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.561312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.561328 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.665539 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.665794 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.665876 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.665955 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.666052 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.768958 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.768992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.769004 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.769022 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.769035 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.871300 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.871359 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.871386 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.871400 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.871411 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.967108 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:26:20.04512395 +0000 UTC Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.974753 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.974789 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.974799 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.974813 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.974823 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:46Z","lastTransitionTime":"2026-02-16T11:07:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.982225 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.982264 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.982326 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:46 crc kubenswrapper[4797]: I0216 11:07:46.982366 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:46 crc kubenswrapper[4797]: E0216 11:07:46.982773 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:46 crc kubenswrapper[4797]: E0216 11:07:46.982895 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:46 crc kubenswrapper[4797]: E0216 11:07:46.983024 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:46 crc kubenswrapper[4797]: E0216 11:07:46.983098 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.078070 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.078151 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.078171 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.078204 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.078232 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.180711 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.180763 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.180776 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.180792 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.180805 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.283219 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.283247 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.283257 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.283274 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.283286 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.386192 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.386363 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.386389 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.386445 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.386467 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.489069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.489122 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.489140 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.489161 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.489177 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.591398 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.591468 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.591491 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.591520 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.591542 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.693738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.693785 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.693801 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.693824 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.693842 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.797072 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.797126 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.797144 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.797167 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.797184 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.900074 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.900115 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.900127 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.900143 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.900155 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:47Z","lastTransitionTime":"2026-02-16T11:07:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:47 crc kubenswrapper[4797]: I0216 11:07:47.967353 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 14:04:31.431658925 +0000 UTC Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.002781 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.002823 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.002833 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.002849 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.002861 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.105425 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.105497 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.105509 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.105527 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.105543 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.208390 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.208436 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.208449 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.208467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.208480 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.311014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.311055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.311063 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.311076 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.311085 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.413982 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.414036 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.414050 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.414068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.414081 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.516524 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.516567 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.516596 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.516611 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.516625 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.619168 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.619210 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.619220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.619236 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.619246 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.721001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.721054 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.721067 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.721083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.721096 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.824332 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.824387 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.824396 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.824409 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.824419 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.925947 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.926010 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.926029 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.926049 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.926064 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:48Z","lastTransitionTime":"2026-02-16T11:07:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.968296 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:14:58.047863953 +0000 UTC Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.982564 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.983184 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:48 crc kubenswrapper[4797]: E0216 11:07:48.983185 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.983214 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:48 crc kubenswrapper[4797]: I0216 11:07:48.983184 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:48 crc kubenswrapper[4797]: E0216 11:07:48.983532 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:48 crc kubenswrapper[4797]: E0216 11:07:48.983836 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:48 crc kubenswrapper[4797]: E0216 11:07:48.984124 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.028402 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.028437 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.028449 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.028472 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.028486 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.130796 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.130828 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.130836 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.130848 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.130857 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.233083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.233134 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.233148 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.233169 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.233186 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.336322 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.336373 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.336385 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.336401 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.336414 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.439004 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.439042 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.439054 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.439068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.439078 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.541911 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.541956 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.541967 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.541983 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.541994 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.645051 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.645104 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.645116 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.645133 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.645148 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.747926 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.747976 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.747988 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.748008 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.748023 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.850898 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.850931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.850942 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.850959 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.850969 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.953269 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.953320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.953333 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.953352 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.953364 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:49Z","lastTransitionTime":"2026-02-16T11:07:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:49 crc kubenswrapper[4797]: I0216 11:07:49.968739 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:43:41.210269617 +0000 UTC Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.055961 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.056016 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.056035 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.056058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.056076 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.159307 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.159350 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.159364 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.159381 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.159394 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.261974 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.262024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.262038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.262061 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.262077 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.364618 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.364666 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.364679 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.364697 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.364712 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.466562 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.466611 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.466621 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.466635 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.466647 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.568113 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.568164 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.568177 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.568194 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.568207 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.670157 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.670196 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.670209 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.670225 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.670235 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.772455 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.772489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.772500 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.772517 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.772527 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.875163 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.875219 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.875237 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.875256 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.875268 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.969297 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 18:35:06.045538878 +0000 UTC Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.978207 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.978243 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.978256 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.978273 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.978284 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:50Z","lastTransitionTime":"2026-02-16T11:07:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.982514 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.982548 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.982623 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:50 crc kubenswrapper[4797]: I0216 11:07:50.982635 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:50 crc kubenswrapper[4797]: E0216 11:07:50.982736 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:50 crc kubenswrapper[4797]: E0216 11:07:50.982809 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:50 crc kubenswrapper[4797]: E0216 11:07:50.982890 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:50 crc kubenswrapper[4797]: E0216 11:07:50.982992 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.080328 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.080371 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.080382 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.080397 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.080408 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.183260 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.183307 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.183317 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.183334 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.183348 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.286068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.286106 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.286115 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.286133 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.286143 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.388568 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.388626 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.388635 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.388651 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.388660 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.491200 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.491240 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.491258 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.491275 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.491287 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.593850 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.593885 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.593898 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.593914 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.593926 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.696287 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.696329 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.696338 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.696351 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.696360 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.802216 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.802274 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.802287 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.802303 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.802317 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.905387 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.905426 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.905436 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.905451 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.905462 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:51Z","lastTransitionTime":"2026-02-16T11:07:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:51 crc kubenswrapper[4797]: I0216 11:07:51.970115 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 13:07:40.517796702 +0000 UTC Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.007416 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.007462 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.007470 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.007484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.007499 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.110220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.110298 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.110308 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.110325 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.110337 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.212527 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.212593 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.212606 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.212622 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.212633 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.315245 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.315295 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.315304 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.315319 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.315330 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.417814 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.417861 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.418548 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.418608 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.418626 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.520819 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.520864 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.520872 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.520889 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.520899 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.623630 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.623701 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.623713 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.623735 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.623748 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.725795 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.725845 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.725857 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.725875 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.725889 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.828112 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.828155 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.828163 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.828178 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.828190 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.931559 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.931627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.931640 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.931657 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.931670 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:52Z","lastTransitionTime":"2026-02-16T11:07:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.970840 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:52:05.469328204 +0000 UTC Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.982279 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:52 crc kubenswrapper[4797]: E0216 11:07:52.982422 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.982446 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.982464 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:52 crc kubenswrapper[4797]: I0216 11:07:52.982495 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:52 crc kubenswrapper[4797]: E0216 11:07:52.982597 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:52 crc kubenswrapper[4797]: E0216 11:07:52.982675 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:52 crc kubenswrapper[4797]: E0216 11:07:52.982791 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.033835 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.033868 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.033876 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.033889 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.033898 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.136089 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.136146 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.136157 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.136174 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.136188 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.238309 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.238349 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.238361 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.238379 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.238391 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.341262 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.341302 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.341312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.341327 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.341337 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.444005 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.444068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.444081 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.444097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.444109 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.546042 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.546076 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.546084 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.546097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.546108 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.648214 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.648262 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.648275 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.648291 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.648305 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.750406 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.750446 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.750454 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.750467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.750480 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.852793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.852862 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.852875 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.852894 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.852905 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.955315 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.955359 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.955369 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.955384 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.955395 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:53Z","lastTransitionTime":"2026-02-16T11:07:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:53 crc kubenswrapper[4797]: I0216 11:07:53.971544 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:14:55.200919857 +0000 UTC Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.058142 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.058207 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.058219 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.058245 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.058257 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.160640 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.160680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.160688 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.160703 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.160713 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.263285 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.263329 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.263341 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.263357 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.263368 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.288774 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.288809 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.288817 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.288830 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.288839 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.302161 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.305692 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.305716 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.305726 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.305738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.305753 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.316220 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.319473 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.319495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.319505 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.319519 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.319529 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.330055 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.333118 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.333150 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.333161 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.333174 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.333183 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.344384 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.347894 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.348027 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.348138 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.348230 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.348288 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.361316 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:54Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.361609 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.366065 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.366156 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.366170 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.366185 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.366198 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.468470 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.468497 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.468505 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.468517 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.468526 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.570734 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.570774 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.570785 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.570802 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.570814 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.673245 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.673278 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.673286 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.673300 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.673310 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.775092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.775125 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.775136 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.775151 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.775160 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.877518 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.877552 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.877560 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.877572 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.877594 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.972629 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:28:53.169744969 +0000 UTC Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.980435 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.980479 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.980492 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.980509 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.980522 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:54Z","lastTransitionTime":"2026-02-16T11:07:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.982363 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.982432 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.982476 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.982530 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.982595 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.982638 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.982681 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.982760 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:54 crc kubenswrapper[4797]: I0216 11:07:54.992608 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.992779 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:54 crc kubenswrapper[4797]: E0216 11:07:54.992879 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:08:26.992857934 +0000 UTC m=+101.713042984 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.083496 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.083623 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.083636 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.083658 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.083957 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.186150 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.186196 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.186206 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.186224 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.186238 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.288670 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.288701 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.288710 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.288722 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.288731 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.391330 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.391359 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.391368 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.391381 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.391391 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.493426 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.493465 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.493476 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.493494 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.493506 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.595759 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.595788 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.595797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.595810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.595818 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.697750 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.698073 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.698161 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.698268 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.698361 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
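
[Annotation] The status_manager.go:875 failures that follow below all share one root cause: the pod.network-node-identity.openshift.io webhook serves a certificate that expired on 2025-08-24T17:21:41Z, so every pod-status patch is rejected during the TLS handshake. A sketch (regex and helper name are illustrative) that extracts the two timestamps embedded in such an error and reports how long the certificate has been expired:

import re
from datetime import datetime

# Pulls "current time <t1> is after <t2>" out of the x509 errors embedded in
# the status_manager entries below; the pattern is an assumption based on the
# error text in this log, not a stable API.
X509_RE = re.compile(r'current time ([\dTZ:-]+) is after ([\dTZ:-]+)')

def expiry_lag(err):
    m = X509_RE.search(err)
    if m is None:
        return None
    now = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%SZ")
    not_after = datetime.strptime(m.group(2), "%Y-%m-%dT%H:%M:%SZ")
    return now - not_after

err = ('tls: failed to verify certificate: x509: certificate has expired or is '
       'not yet valid: current time 2026-02-16T11:07:55Z is after 2025-08-24T17:21:41Z')
print(expiry_lag(err))  # ~175 days: expired long before this boot
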
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.800174 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.800215 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.800226 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.800243 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.800254 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.902500 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.902732 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.902845 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.902954 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.903046 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:55Z","lastTransitionTime":"2026-02-16T11:07:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.973387 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 20:20:36.864787647 +0000 UTC Feb 16 11:07:55 crc kubenswrapper[4797]: I0216 11:07:55.992333 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:55Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.002530 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 
11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.007902 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.008000 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.008014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.008069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.008107 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.013410 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.023523 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.035313 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.048065 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a6
83320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"
name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.059522 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.074595 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.090025 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c03
39a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.099481 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.108996 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.111221 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.111262 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.111272 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.111317 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.111332 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.120195 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.129694 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.139428 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.148768 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.158328 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.168673 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:56Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.213007 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.213035 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.213045 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.213058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.213067 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.315017 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.315388 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.315507 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.315649 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.315789 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.417863 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.418205 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.418332 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.418459 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.418617 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.521116 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.521164 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.521174 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.521189 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.521199 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.624077 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.624112 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.624121 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.624135 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.624144 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.727784 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.727838 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.727851 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.727869 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.727881 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.830279 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.830338 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.830353 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.830367 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.830376 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.933073 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.933108 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.933117 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.933129 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.933138 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:56Z","lastTransitionTime":"2026-02-16T11:07:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.974554 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 00:33:40.206195487 +0000 UTC Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.982155 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.982253 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.982163 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:56 crc kubenswrapper[4797]: E0216 11:07:56.982328 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.982174 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:56 crc kubenswrapper[4797]: E0216 11:07:56.982505 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:56 crc kubenswrapper[4797]: E0216 11:07:56.982699 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:56 crc kubenswrapper[4797]: E0216 11:07:56.983075 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:56 crc kubenswrapper[4797]: I0216 11:07:56.983342 4797 scope.go:117] "RemoveContainer" containerID="f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c" Feb 16 11:07:56 crc kubenswrapper[4797]: E0216 11:07:56.983513 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.036351 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.036418 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.036436 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.036458 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.036474 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.139546 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.139617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.139632 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.139649 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.139661 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.242310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.242382 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.242393 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.242406 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.242416 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.344728 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.344764 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.344773 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.344786 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.344795 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.446772 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.446809 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.446819 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.446867 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.446879 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.518564 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/0.log" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.518651 4797 generic.go:334] "Generic (PLEG): container finished" podID="9532a098-7e41-454c-af48-44f9a9478d12" containerID="c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5" exitCode=1 Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.518688 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerDied","Data":"c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.519123 4797 scope.go:117] "RemoveContainer" containerID="c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.535560 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.548948 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.548991 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:57 crc 
kubenswrapper[4797]: I0216 11:07:57.549002 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.549018 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.549031 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.552948 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.566233 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.578242 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.590463 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.604696 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.617352 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.627873 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.645695 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.651826 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.652058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.652145 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.652231 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.652310 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.663740 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.673994 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.684625 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.684625 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.709613 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.726923 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.726923 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.737173 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.748974 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:57Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.754443 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.754484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.754495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.754513 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.754525 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.857174 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.857210 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.857221 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.857236 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.857248 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.959487 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.959523 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.959534 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.959563 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.959596 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:57Z","lastTransitionTime":"2026-02-16T11:07:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:57 crc kubenswrapper[4797]: I0216 11:07:57.974833 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 08:00:13.261523194 +0000 UTC
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.061926 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.061982 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.061999 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.062024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.062043 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.163851 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.163894 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.163905 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.163920 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.163931 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.266106 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.266129 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.266137 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.266150 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.266159 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.368319 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.368363 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.368375 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.368393 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.368405 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.471496 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.471536 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.471545 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.471561 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.471587 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.524721 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/0.log"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.524767 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerStarted","Data":"add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc"}
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.556003 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.568797 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.573427 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.573486 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.573499 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.573515 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.573525 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.580154 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z"
Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.591924 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z"
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.615029 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.624731 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.634097 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.644391 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.655292 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.665385 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 
11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.675284 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.675823 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.675851 4797 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.675863 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.675877 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.675887 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.687235 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.700061 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.716853 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.731306 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.745836 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:07:58Z is after 2025-08-24T17:21:41Z" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.777617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.777659 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.777669 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.777683 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.777693 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.879626 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.879667 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.879678 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.879694 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.879708 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.975616 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 09:44:41.306913352 +0000 UTC Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.981736 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.981762 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.981750 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.981744 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:07:58 crc kubenswrapper[4797]: E0216 11:07:58.981970 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:07:58 crc kubenswrapper[4797]: E0216 11:07:58.981878 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
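
Every NotReady heartbeat in this stretch carries the same message: no CNI configuration file in /etc/kubernetes/cni/net.d/. A minimal Go sketch of the kind of check behind that message, assuming libcni's usual extension filter (.conf, .conflist, .json; that filter is an assumption, not something this log states):

// cnicheck.go - sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": the runtime looks for network configs in
// that directory and reports NetworkReady=false while it stays empty.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log above
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read conf dir:", err)
		return
	}
	var found []string
	for _, e := range entries {
		// Extension filter follows libcni's convention - an assumption here.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// The state the kubelet keeps reporting above: the network operator
		// has not written a config yet, so the node stays NotReady.
		fmt.Println("no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs:", found)
}

Until the network provider writes a config into that directory, the kubelet will keep flipping the Ready condition to False with reason KubeletNotReady, exactly as the setters.go lines show.
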
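The two status-patch failures at the top of this stretch (for kube-apiserver-crc and network-operator-58b4c7f79c-55gtf) fail for a different reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-02-16. A short sketch that confirms this from the node itself (the address comes from the log; everything else is illustrative):

// certcheck.go - dials the webhook endpoint from the failed POST above and
// prints the validity window of the certificate it presents, mirroring the
// "certificate has expired or is not yet valid" check that failed.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // endpoint taken from the failed webhook POST
	conn, err := tls.Dial("tcp", addr, &tls.Config{
		InsecureSkipVerify: true, // inspect the cert without trusting it
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject, cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339), now.After(cert.NotAfter))
	}
}

InsecureSkipVerify lets the handshake complete so the certificate can be read even though it no longer verifies; until that serving certificate is rotated, every status patch routed through this webhook will keep failing with the same x509 error.
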
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:07:58 crc kubenswrapper[4797]: E0216 11:07:58.982062 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:07:58 crc kubenswrapper[4797]: E0216 11:07:58.982120 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.982457 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.982495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.982529 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.982546 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:58 crc kubenswrapper[4797]: I0216 11:07:58.982558 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:58Z","lastTransitionTime":"2026-02-16T11:07:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.085055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.085111 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.085130 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.085154 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.085171 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.187954 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.188026 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.188048 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.188075 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.188096 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.290963 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.291016 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.291039 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.291097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.291115 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.393609 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.393658 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.393669 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.393686 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.393699 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.496293 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.496346 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.496358 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.496374 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.496390 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.600660 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.600725 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.600742 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.600766 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.600783 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.703312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.703348 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.703361 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.703376 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.703387 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.805014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.805055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.805069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.805100 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.805122 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.907682 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.907714 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.907724 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.907739 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.907752 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:07:59Z","lastTransitionTime":"2026-02-16T11:07:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:07:59 crc kubenswrapper[4797]: I0216 11:07:59.976651 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:37:38.503451608 +0000 UTC Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.010476 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.010541 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.010551 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.010567 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.010595 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.113095 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.113130 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.113156 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.113173 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.113184 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.216119 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.216153 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.216165 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.216180 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.216192 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.318712 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.318762 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.318770 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.318788 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.318798 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.428831 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.428876 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.428886 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.428899 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.428909 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.531721 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.531802 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.531824 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.531850 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.531872 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.634660 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.634815 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.634841 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.634910 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.634934 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.737244 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.737339 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.737387 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.737412 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.737430 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.839857 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.839900 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.839910 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.839925 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.839938 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.942353 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.942396 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.942409 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.942433 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.942455 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:00Z","lastTransitionTime":"2026-02-16T11:08:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.977330 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 03:46:34.738619515 +0000 UTC Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.982701 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.982774 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.982702 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:00 crc kubenswrapper[4797]: E0216 11:08:00.982880 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:00 crc kubenswrapper[4797]: E0216 11:08:00.983032 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:00 crc kubenswrapper[4797]: E0216 11:08:00.983080 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:00 crc kubenswrapper[4797]: I0216 11:08:00.983135 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:00 crc kubenswrapper[4797]: E0216 11:08:00.983266 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.044835 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.044887 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.044901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.044917 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.044927 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.147927 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.147967 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.147978 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.147995 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.148010 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.250053 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.250091 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.250101 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.250117 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.250128 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.352423 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.352464 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.352478 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.352493 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.352504 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.455858 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.455926 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.455956 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.455997 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.456022 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.558981 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.559039 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.559048 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.559061 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.559071 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.661029 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.661083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.661097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.661112 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.661125 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.763533 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.763591 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.763601 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.763617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.763631 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.866458 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.866507 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.866519 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.866538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.866553 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.969624 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.969684 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.969715 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.969734 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.969745 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:01Z","lastTransitionTime":"2026-02-16T11:08:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:01 crc kubenswrapper[4797]: I0216 11:08:01.977954 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:47:50.75906151 +0000 UTC Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.071834 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.071869 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.071878 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.071890 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.071899 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.174603 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.174652 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.174664 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.174680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.174690 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.277083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.277123 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.277131 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.277145 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.277153 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.379704 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.379782 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.379805 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.379836 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.379859 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.482678 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.482716 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.482726 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.482743 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.482755 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.585964 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.585995 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.586006 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.586020 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.586033 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.688918 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.688956 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.688967 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.688980 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.688989 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.792647 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.792735 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.792760 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.792793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.792816 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.896361 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.896426 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.896444 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.896466 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.896484 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.978776 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:57:09.724791427 +0000 UTC Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.982040 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.982069 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.982121 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:02 crc kubenswrapper[4797]: E0216 11:08:02.982151 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.982177 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:02 crc kubenswrapper[4797]: E0216 11:08:02.982260 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
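
The kubelet-serving lines from certificate_manager.go report the same expiration (2026-02-24 05:53:03 UTC) but a different rotation deadline on every pass (2026-01-03, 2025-12-02, 2025-12-28, 2026-01-15 above). That pattern matches a jittered deadline drawn afresh on each evaluation, in the style of client-go's certificate manager, at a random point late in the certificate's lifetime; since every drawn deadline is already behind the node clock, rotation is immediately due and the line repeats. A sketch of that computation (the 70-90% window and the one-year lifetime are assumptions, not values from this log):

// rotation.go - sketch of a jittered rotation-deadline computation, modeled
// on k8s.io/client-go/util/certificate; the exact constants are an
// assumption, not taken from this log.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline picks a random point in the 70-90% span of the
// certificate's lifetime, so repeated calls yield different deadlines.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiration taken from the kubelet-serving lines in the log above.
	notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2026-02-24 05:53:03 +0000 UTC")
	if err != nil {
		panic(err)
	}
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed lifetime

	for i := 0; i < 3; i++ {
		// Each pass draws a fresh deadline, which is why every
		// "rotation deadline is ..." line above shows a different value.
		fmt.Println(nextRotationDeadline(notBefore, notAfter))
	}
}
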
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:02 crc kubenswrapper[4797]: E0216 11:08:02.982317 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:02 crc kubenswrapper[4797]: E0216 11:08:02.982371 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.999088 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.999127 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.999137 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.999152 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:02 crc kubenswrapper[4797]: I0216 11:08:02.999165 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:02Z","lastTransitionTime":"2026-02-16T11:08:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.101274 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.101312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.101323 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.101340 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.101351 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.203695 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.203743 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.203758 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.203777 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.203789 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.306191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.306243 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.306252 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.306268 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.306277 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.408774 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.408840 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.408850 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.408865 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.408874 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.511309 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.511380 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.511418 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.511452 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.511475 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.613628 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.613701 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.613723 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.613751 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.613774 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.716618 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.716687 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.716705 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.716728 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.716745 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.820431 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.820493 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.820510 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.820533 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.820552 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.923601 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.923641 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.923648 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.923661 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.923670 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:03Z","lastTransitionTime":"2026-02-16T11:08:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:03 crc kubenswrapper[4797]: I0216 11:08:03.979126 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:47:22.022082112 +0000 UTC Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.026790 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.026854 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.026871 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.026895 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.026912 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.129207 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.129259 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.129272 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.129290 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.129333 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.232352 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.232391 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.232399 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.232414 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.232423 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.335195 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.335239 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.335273 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.335309 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.335320 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.438001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.438084 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.438116 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.438137 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.438157 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.540738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.540799 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.540810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.540827 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.540839 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.643891 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.643937 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.643945 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.643959 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.643968 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.715637 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.715686 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.715702 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.715723 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.715739 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.735012 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.740438 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.740483 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.740501 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.740523 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.740538 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.757872 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.761672 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.761726 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.761738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.761752 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.761766 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.782216 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.786910 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.786964 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.786975 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.786990 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.787002 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.806214 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.810111 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.810149 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.810164 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.810180 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.810191 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.824877 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:04Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.825021 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.826392 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.826454 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.826467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.826482 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.826493 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.928936 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.928983 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.929001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.929017 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.929028 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:04Z","lastTransitionTime":"2026-02-16T11:08:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.979656 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:55:56.569470575 +0000 UTC Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.982161 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.982195 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.982290 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.982420 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.982546 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:04 crc kubenswrapper[4797]: I0216 11:08:04.982697 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.982759 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:04 crc kubenswrapper[4797]: E0216 11:08:04.982909 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.031461 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.031526 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.031548 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.031617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.031643 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.133547 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.133662 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.133682 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.133706 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.133724 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.236627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.236683 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.236695 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.236714 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.236730 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.339021 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.339069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.339080 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.339097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.339109 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.441954 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.442034 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.442057 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.442085 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.442110 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.544538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.544632 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.544649 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.544673 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.544696 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.647519 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.647572 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.647620 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.647644 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.647659 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.749676 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.749719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.749732 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.749749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.749761 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.852477 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.852528 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.852543 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.852563 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.852600 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.954809 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.954849 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.954859 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.954872 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.954882 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:05Z","lastTransitionTime":"2026-02-16T11:08:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.980653 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 09:45:44.123159634 +0000 UTC Feb 16 11:08:05 crc kubenswrapper[4797]: I0216 11:08:05.999721 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:05Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.016465 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.026298 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.036359 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.046869 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.060178 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.060245 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.060255 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.060268 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.060278 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.060133 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.073153 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.086050 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.100658 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac
2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.114406 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.132829 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.150112 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.162293 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.162327 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc 
kubenswrapper[4797]: I0216 11:08:06.162350 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.162364 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.162374 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.165114 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.179340 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.195539 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c03
39a85e6a76191204bc095d2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.207230 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.218494 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:06Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.266039 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.266634 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.266812 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.266975 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.267166 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.370797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.371089 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.371155 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.371225 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.371285 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.474237 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.474315 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.474331 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.474355 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.474368 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.575992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.576065 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.576085 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.576110 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.576128 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.678430 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.678476 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.678488 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.678504 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.678514 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.782109 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.782165 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.782185 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.782209 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.782226 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.885281 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.885333 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.885350 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.885373 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.885390 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.981614 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:17:43.679036807 +0000 UTC Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.981705 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:06 crc kubenswrapper[4797]: E0216 11:08:06.982401 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.981745 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:06 crc kubenswrapper[4797]: E0216 11:08:06.982643 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.981746 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:06 crc kubenswrapper[4797]: E0216 11:08:06.982836 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.981813 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:06 crc kubenswrapper[4797]: E0216 11:08:06.983043 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.987540 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.987630 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.987648 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.987671 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:06 crc kubenswrapper[4797]: I0216 11:08:06.987692 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:06Z","lastTransitionTime":"2026-02-16T11:08:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.090220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.090269 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.090283 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.090301 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.090313 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.193082 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.193380 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.193446 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.193518 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.193603 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.295544 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.295888 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.295984 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.296072 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.296160 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.399351 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.399629 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.399718 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.399842 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.399948 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.502820 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.502873 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.502891 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.502910 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.502924 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.605706 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.605800 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.605814 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.605859 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.605875 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.708514 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.708537 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.708546 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.708558 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.708567 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.811440 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.811752 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.811819 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.811885 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.811956 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.914617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.914669 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.914687 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.914736 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.914754 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:07Z","lastTransitionTime":"2026-02-16T11:08:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.982512 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 07:15:59.317465247 +0000 UTC Feb 16 11:08:07 crc kubenswrapper[4797]: I0216 11:08:07.983331 4797 scope.go:117] "RemoveContainer" containerID="f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.016776 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.016820 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.016831 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.016851 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.016869 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.118722 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.119092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.119102 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.119118 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.119132 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.221931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.221977 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.221986 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.222000 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.222009 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.325351 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.325726 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.325824 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.325925 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.326026 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.429301 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.429859 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.429933 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.429967 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.429990 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.532454 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.532496 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.532506 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.532521 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.532531 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.634993 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.635051 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.635070 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.635092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.635107 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.737558 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.737631 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.737644 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.737660 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.737672 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.839737 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.839776 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.839787 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.839803 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.839815 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.941981 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.942014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.942023 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.942035 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.942044 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:08Z","lastTransitionTime":"2026-02-16T11:08:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.982056 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.982079 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.982060 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.982154 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:08 crc kubenswrapper[4797]: E0216 11:08:08.982250 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:08 crc kubenswrapper[4797]: E0216 11:08:08.982307 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:08 crc kubenswrapper[4797]: E0216 11:08:08.982458 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:08 crc kubenswrapper[4797]: E0216 11:08:08.982499 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.982798 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:59:51.384964983 +0000 UTC Feb 16 11:08:08 crc kubenswrapper[4797]: I0216 11:08:08.991471 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.044373 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.044406 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.044414 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.044428 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.044438 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.147601 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.147640 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.147650 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.147666 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.147678 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.250140 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.250183 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.250193 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.250207 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.250218 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.355279 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.355319 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.355330 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.355344 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.355356 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.457479 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.457538 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.457550 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.457569 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.457605 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.561220 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/2.log" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.563390 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.563622 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.563765 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.563910 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.564102 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.565867 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.667386 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.667992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.668105 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.668201 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.668280 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.770911 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.770989 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.771015 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.771043 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.771064 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.874378 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.874469 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.874498 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.874529 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.874554 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.977384 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.977435 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.977445 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.977458 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.977470 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:09Z","lastTransitionTime":"2026-02-16T11:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:09 crc kubenswrapper[4797]: I0216 11:08:09.983025 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:30:39.350682917 +0000 UTC Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.079756 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.079814 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.079833 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.079862 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.079883 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.182346 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.182388 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.182398 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.182413 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.182425 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.307194 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.307227 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.307235 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.307248 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.307256 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.410200 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.410525 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.419891 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.419941 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.419956 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.522619 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.522666 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.522676 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.522693 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.522707 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.626024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.626074 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.626085 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.626105 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.626118 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.728085 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.728121 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.728132 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.728149 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.728164 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.830054 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.830092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.830100 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.830113 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.830123 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.838428 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.838598 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.838561493 +0000 UTC m=+149.558746473 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.838663 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.838815 4797 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.838868 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.838858411 +0000 UTC m=+149.559043391 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.932793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.932828 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.932838 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.932852 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.932868 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:10Z","lastTransitionTime":"2026-02-16T11:08:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.939742 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.939859 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.939919 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.939930 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.939958 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.939969 4797 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.940006 4797 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.940048 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.940067 4797 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.940016 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.940001616 +0000 UTC m=+149.660186596 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.940079 4797 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.940096 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.940081578 +0000 UTC m=+149.660266558 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.940116 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.940101719 +0000 UTC m=+149.660286789 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.982483 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.982531 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.982612 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.982679 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.982804 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.982927 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.983137 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 14:32:23.288973655 +0000 UTC Feb 16 11:08:10 crc kubenswrapper[4797]: I0216 11:08:10.983259 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:10 crc kubenswrapper[4797]: E0216 11:08:10.983365 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.035719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.035774 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.035785 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.035798 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.035809 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.138377 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.138418 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.138426 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.138441 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.138450 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.240759 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.240797 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.240814 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.240836 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.240856 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.344687 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.344762 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.344781 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.344809 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.344837 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.446990 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.447048 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.447066 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.447212 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.447239 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.550129 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.550183 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.550199 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.550220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.550240 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.576056 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/3.log" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.576957 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/2.log" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.581169 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b" exitCode=1 Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.581238 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.581301 4797 scope.go:117] "RemoveContainer" containerID="f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.581838 4797 scope.go:117] "RemoveContainer" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b" Feb 16 11:08:11 crc kubenswrapper[4797]: E0216 11:08:11.582062 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.600985 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.617429 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.637079 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.652997 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.653043 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.653061 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.653103 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.653121 4797 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.655320 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.671801 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.686007 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.700429 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.712777 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.737101 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f381f90c734a01fcaba5ed345b87779b9bf39c0339a85e6a76191204bc095d2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:40Z\\\",\\\"message\\\":\\\" crc\\\\nI0216 11:07:40.939310 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939322 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-cglwk\\\\nI0216 11:07:40.939328 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI0216 11:07:40.939327 6482 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nF0216 11:07:40.939331 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:08:11Z\\\",\\\"message\\\":\\\"work=default : 12.093601ms\\\\nI0216 11:08:11.265553 6836 
factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 11:08:11.265982 6836 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 11:08:11.266439 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 11:08:11.266478 6836 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 11:08:11.266515 6836 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:08:11.266514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:08:11.266538 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:08:11.266552 6836 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:08:11.266561 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:08:11.266631 6836 factory.go:656] Stopping watch factory\\\\nI0216 11:08:11.266639 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:08:11.266660 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:08:11.266662 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0216 11:08:11.266683 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:08:11.266695 6836 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 11:08:11.266770 6836 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:08:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.750983 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.755205 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.755236 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.755250 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.755272 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.755287 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.763337 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.776526 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.789625 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.801205 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.809610 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80704342-8cf6-432d-a729-c9ed85d25843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03ac68651e6f65e2295acfcc538003af7c162a7fb76761c3e28d3b15e1c0c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.819787 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.856953 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.856988 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.857000 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.857015 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.857027 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.861002 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.878948 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:11Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.958810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.958859 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.958872 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.958891 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.958905 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:11Z","lastTransitionTime":"2026-02-16T11:08:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:11 crc kubenswrapper[4797]: I0216 11:08:11.983508 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 13:46:39.687095901 +0000 UTC Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.060720 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.060763 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.060773 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.060790 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.060802 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.163793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.163872 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.163896 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.163921 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.163938 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.266950 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.267007 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.267023 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.267048 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.267077 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.369451 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.369489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.369498 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.369512 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.369522 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.472361 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.472432 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.472453 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.472481 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.472503 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.576122 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.576184 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.576203 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.576231 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.576253 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.586603 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/3.log" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.592563 4797 scope.go:117] "RemoveContainer" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b" Feb 16 11:08:12 crc kubenswrapper[4797]: E0216 11:08:12.592902 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.609820 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.632852 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.652214 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.669671 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.679377 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.679442 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.679459 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.679488 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.679508 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.689736 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.714027 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.735825 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.754320 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.782713 4797 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.782758 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.782769 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.782787 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.782799 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.787320 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f
01fd9fd773929bd9376bcb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:08:11Z\\\",\\\"message\\\":\\\"work=default : 12.093601ms\\\\nI0216 11:08:11.265553 6836 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 11:08:11.265982 6836 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 11:08:11.266439 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 11:08:11.266478 6836 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 11:08:11.266515 6836 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:08:11.266514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:08:11.266538 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:08:11.266552 6836 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:08:11.266561 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:08:11.266631 6836 factory.go:656] Stopping watch factory\\\\nI0216 11:08:11.266639 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:08:11.266660 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:08:11.266662 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0216 11:08:11.266683 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:08:11.266695 6836 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 11:08:11.266770 6836 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:08:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.805498 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.823353 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.844757 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.857981 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80704342-8cf6-432d-a729-c9ed85d25843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03ac68651e6f65e2295acfcc538003af7c162a7fb76761c3e28d3b15e1c0c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.872682 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.885174 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.885214 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.885225 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.885240 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.885251 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.888871 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.902716 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.914785 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.924943 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:12Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.982648 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.982693 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.982702 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:12 crc kubenswrapper[4797]: E0216 11:08:12.982755 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:12 crc kubenswrapper[4797]: E0216 11:08:12.982834 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.982866 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:12 crc kubenswrapper[4797]: E0216 11:08:12.983008 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:12 crc kubenswrapper[4797]: E0216 11:08:12.983375 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.983689 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 16:57:00.297380825 +0000 UTC Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.987296 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.987358 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.987383 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.987409 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:12 crc kubenswrapper[4797]: I0216 11:08:12.987435 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:12Z","lastTransitionTime":"2026-02-16T11:08:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.089674 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.089715 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.089724 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.089739 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.089748 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.192978 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.193021 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.193033 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.193049 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.193061 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.295827 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.295877 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.295888 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.295902 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.295912 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.398318 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.398367 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.398382 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.398409 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.398424 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.500788 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.500853 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.500871 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.500892 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.500909 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.604246 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.604304 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.604320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.604349 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.604374 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.706778 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.706815 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.706828 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.706843 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.706856 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.809182 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.809238 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.809259 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.809281 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.809298 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.912117 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.912154 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.912162 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.912177 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.912187 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:13Z","lastTransitionTime":"2026-02-16T11:08:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 16 11:08:13 crc kubenswrapper[4797]: I0216 11:08:13.984339 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 03:49:14.35502618 +0000 UTC
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.014864 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.014937 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.014961 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.014991 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.015008 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.116815 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.116857 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.116867 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.116882 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.116893 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.219845 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.219896 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.219905 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.219920 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.219933 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.321810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.321844 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.321853 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.321866 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.321875 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.424871 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.425111 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.425205 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.425270 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.425333 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.527629 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.527863 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.527924 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.528052 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.528120 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.631055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.631310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.631388 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.631456 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.631517 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.734101 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.734363 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.734429 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.734508 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.734640 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.836802 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.836901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.836940 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.836956 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.836965 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.940547 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.940640 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.940659 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.940681 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.940698 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.982627 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.982731 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.982679 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.982655 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:08:14 crc kubenswrapper[4797]: E0216 11:08:14.982955 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:08:14 crc kubenswrapper[4797]: E0216 11:08:14.983088 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:08:14 crc kubenswrapper[4797]: E0216 11:08:14.983224 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:08:14 crc kubenswrapper[4797]: E0216 11:08:14.983399 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.984957 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 13:02:33.886202316 +0000 UTC
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.988822 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.988880 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.988905 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.988931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:14 crc kubenswrapper[4797]: I0216 11:08:14.988953 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:14Z","lastTransitionTime":"2026-02-16T11:08:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 11:08:15 crc kubenswrapper[4797]: E0216 11:08:15.015786 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:15Z is after 2025-08-24T17:21:41Z"
Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.020206 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.020262 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.020274 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.020293 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.020304 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: E0216 11:08:15.038807 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.043794 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.043848 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.043866 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.043889 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.043907 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: E0216 11:08:15.063710 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.068937 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.068993 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.069010 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.069032 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.069049 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: E0216 11:08:15.087801 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.092627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.092675 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.092687 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.092704 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.092717 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: E0216 11:08:15.113030 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:15 crc kubenswrapper[4797]: E0216 11:08:15.113261 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.115058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.115113 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.115130 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.115152 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.115169 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.217735 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.217765 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.217772 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.217786 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.217794 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.320350 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.320624 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.320688 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.320756 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.320811 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.423861 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.423984 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.424088 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.424117 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.424152 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.526847 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.526897 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.526913 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.526936 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.526956 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.630124 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.630194 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.630214 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.630241 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.630261 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.733408 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.733452 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.733466 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.733486 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.733502 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.836678 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.836857 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.836888 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.836988 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.837024 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.940069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.940141 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.940163 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.940195 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.940220 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:15Z","lastTransitionTime":"2026-02-16T11:08:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.985695 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:28:49.42860498 +0000 UTC Feb 16 11:08:15 crc kubenswrapper[4797]: I0216 11:08:15.996641 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:15Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.008105 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.021206 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.035200 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.043412 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.043446 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.043455 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.043468 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.043478 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.047375 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.061929 4797 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee
00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.075882 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80704342-8cf6-432d-a729-c9ed85d25843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03ac68651e6f65e2295acfcc538003af7c162a7fb76761c3e28d3b15e1c0c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8
f80fbae4cc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.089136 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.100207 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 
11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.117450 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.127633 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.138369 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.146905 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.146957 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.147037 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.147061 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.147489 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.150805 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.163099 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.176399 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.193084 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f
01fd9fd773929bd9376bcb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:08:11Z\\\",\\\"message\\\":\\\"work=default : 12.093601ms\\\\nI0216 11:08:11.265553 6836 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 11:08:11.265982 6836 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 11:08:11.266439 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 11:08:11.266478 6836 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 11:08:11.266515 6836 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:08:11.266514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:08:11.266538 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:08:11.266552 6836 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:08:11.266561 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:08:11.266631 6836 factory.go:656] Stopping watch factory\\\\nI0216 11:08:11.266639 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:08:11.266660 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:08:11.266662 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0216 11:08:11.266683 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:08:11.266695 6836 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 11:08:11.266770 6836 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:08:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.203903 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.213347 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:16Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.250328 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.250378 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.250389 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.250405 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.250418 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.352391 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.352434 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.352452 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.352473 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.352488 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.454304 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.454343 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.454352 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.454367 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.454379 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.556915 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.556972 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.556992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.557012 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.557026 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.659694 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.659734 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.659741 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.659755 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.659764 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.762038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.762092 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.762103 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.762116 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.762126 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.864145 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.864186 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.864199 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.864215 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.864226 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.967180 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.967220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.967228 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.967241 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.967250 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:16Z","lastTransitionTime":"2026-02-16T11:08:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.982670 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.982725 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.982707 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:16 crc kubenswrapper[4797]: E0216 11:08:16.982780 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.982803 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:16 crc kubenswrapper[4797]: E0216 11:08:16.982906 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:16 crc kubenswrapper[4797]: E0216 11:08:16.982920 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:16 crc kubenswrapper[4797]: E0216 11:08:16.982968 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.985849 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 20:25:44.757478864 +0000 UTC Feb 16 11:08:16 crc kubenswrapper[4797]: I0216 11:08:16.997605 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.069646 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.069686 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.069696 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.069711 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.069722 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.172434 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.172466 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.172475 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.172486 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.172494 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.274977 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.275029 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.275040 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.275074 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.275086 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.380877 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.381174 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.381549 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.381620 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.381659 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.484957 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.485298 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.485312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.485329 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.485341 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.588067 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.588101 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.588110 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.588124 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.588134 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.690933 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.690976 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.690984 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.690998 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.691007 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.793078 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.793124 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.793136 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.793153 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.793166 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.896235 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.896302 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.896320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.896345 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.896364 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.986120 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:38:45.331717776 +0000 UTC Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.998000 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.998042 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.998057 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.998071 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:17 crc kubenswrapper[4797]: I0216 11:08:17.998082 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:17Z","lastTransitionTime":"2026-02-16T11:08:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.100842 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.100916 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.100937 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.100963 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.100980 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.203320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.203646 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.203753 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.203827 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.203892 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.306639 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.306901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.306984 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.307071 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.307152 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.408915 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.408955 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.408965 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.408977 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.408986 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.512116 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.512184 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.512203 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.512228 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.512245 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.614533 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.614599 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.614613 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.614631 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.614644 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.717622 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.717695 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.717719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.717749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.717773 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.820208 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.820245 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.820255 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.820281 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.820293 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.922940 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.923011 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.923029 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.923055 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.923073 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:18Z","lastTransitionTime":"2026-02-16T11:08:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.982078 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.982138 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.982168 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.982097 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:18 crc kubenswrapper[4797]: E0216 11:08:18.982315 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:18 crc kubenswrapper[4797]: E0216 11:08:18.982417 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:18 crc kubenswrapper[4797]: E0216 11:08:18.982645 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:18 crc kubenswrapper[4797]: E0216 11:08:18.982787 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:18 crc kubenswrapper[4797]: I0216 11:08:18.987068 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:29:34.326389413 +0000 UTC Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.026265 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.026375 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.026395 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.026420 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.026436 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.129043 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.129089 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.129101 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.129120 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.129135 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.232085 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.232146 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.232163 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.232186 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.232205 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.334980 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.335014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.335022 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.335034 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.335043 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.438127 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.438191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.438218 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.438244 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.438260 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.541187 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.541227 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.541238 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.541284 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.541296 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.643247 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.643291 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.643301 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.643319 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.643333 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.746269 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.746312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.746320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.746337 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.746348 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.849187 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.849267 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.849285 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.849321 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.849354 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.953333 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.953430 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.953441 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.953459 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.953471 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:19Z","lastTransitionTime":"2026-02-16T11:08:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 11:08:19 crc kubenswrapper[4797]: I0216 11:08:19.987172 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:37:57.397343884 +0000 UTC
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.055819 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.055884 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.055907 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.055936 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.055959 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:20Z","lastTransitionTime":"2026-02-16T11:08:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[identical five-message node-status cycles, differing only in timestamp, recur roughly every 100 ms from 11:08:20.159101 through 11:08:25.134016 (49 further cycles); only the non-repeating entries and the final, truncated cycle are retained below]
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.982648 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:08:20 crc kubenswrapper[4797]: E0216 11:08:20.982823 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.982869 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:08:20 crc kubenswrapper[4797]: E0216 11:08:20.983085 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.983122 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:08:20 crc kubenswrapper[4797]: E0216 11:08:20.983473 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.983957 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:08:20 crc kubenswrapper[4797]: E0216 11:08:20.984215 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:08:20 crc kubenswrapper[4797]: I0216 11:08:20.988143 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 15:00:28.598573093 +0000 UTC
Feb 16 11:08:21 crc kubenswrapper[4797]: I0216 11:08:21.988602 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 11:37:34.880603834 +0000 UTC
Feb 16 11:08:22 crc kubenswrapper[4797]: I0216 11:08:22.982482 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:08:22 crc kubenswrapper[4797]: I0216 11:08:22.982542 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:08:22 crc kubenswrapper[4797]: I0216 11:08:22.982653 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:08:22 crc kubenswrapper[4797]: E0216 11:08:22.982729 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:08:22 crc kubenswrapper[4797]: I0216 11:08:22.982745 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:08:22 crc kubenswrapper[4797]: E0216 11:08:22.983123 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:08:22 crc kubenswrapper[4797]: E0216 11:08:22.983197 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:08:22 crc kubenswrapper[4797]: E0216 11:08:22.983398 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:08:22 crc kubenswrapper[4797]: I0216 11:08:22.989154 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:04:50.714654338 +0000 UTC
Feb 16 11:08:23 crc kubenswrapper[4797]: I0216 11:08:23.983763 4797 scope.go:117] "RemoveContainer" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b"
Feb 16 11:08:23 crc kubenswrapper[4797]: E0216 11:08:23.984066 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364"
Feb 16 11:08:23 crc kubenswrapper[4797]: I0216 11:08:23.989748 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:20:15.182632571 +0000 UTC
Feb 16 11:08:24 crc kubenswrapper[4797]: I0216 11:08:24.981951 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:08:24 crc kubenswrapper[4797]: I0216 11:08:24.981970 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:08:24 crc kubenswrapper[4797]: I0216 11:08:24.982035 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:08:24 crc kubenswrapper[4797]: E0216 11:08:24.982170 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:08:24 crc kubenswrapper[4797]: I0216 11:08:24.982237 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:08:24 crc kubenswrapper[4797]: E0216 11:08:24.982336 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:08:24 crc kubenswrapper[4797]: E0216 11:08:24.982426 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:08:24 crc kubenswrapper[4797]: E0216 11:08:24.982518 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:08:24 crc kubenswrapper[4797]: I0216 11:08:24.990367 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 08:38:42.254195263 +0000 UTC
Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.236512 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.236562 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.236610 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.236640 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.236660 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.283558 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.284675 4797 scope.go:117] "RemoveContainer" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b" Feb 16 11:08:25 crc kubenswrapper[4797]: E0216 11:08:25.284886 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.339348 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.339413 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.339430 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.339448 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.339459 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.415753 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.415802 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.415818 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.415840 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.415855 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: E0216 11:08:25.431330 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.436523 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.436564 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.436593 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.436611 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.436625 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: E0216 11:08:25.461222 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.466648 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.466702 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
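Each of these failed patches dies in the TLS handshake with the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743: the serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-16. A minimal Go sketch of the same validity check, assuming only the endpoint from the log line; verification is deliberately skipped so the expired leaf can still be fetched and inspected:

```go
// Sketch: fetch the serving certificate of the webhook endpoint named in the
// log above (https://127.0.0.1:9743) and compare its validity window against
// the local clock, mirroring the "x509: certificate has expired" failure.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // verification would fail; we want the cert anyway
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificates presented")
		return
	}
	leaf := certs[0]
	fmt.Printf("subject: %s\nnotBefore: %s\nnotAfter: %s\n",
		leaf.Subject, leaf.NotBefore, leaf.NotAfter)
	if now := time.Now(); now.After(leaf.NotAfter) {
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	}
}
```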
event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.466713 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.466731 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.466744 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: E0216 11:08:25.482906 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.488265 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.488310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
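The NotReady condition repeated throughout this stream always carries the same root cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A short Go sketch of that directory check; the extensions matched here (.conf, .conflist, .json) are the conventional CNI ones and are an assumption, not the runtime's exact matching rules:

```go
// Sketch: check whether /etc/kubernetes/cni/net.d (the directory named in the
// NetworkPluginNotReady message) contains any CNI configuration files.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file in", dir, "- network plugin not ready")
	}
}
```

Until a file appears there (normally written by the ovnkube-node pod, which is itself in CrashLoopBackOff above), the runtime keeps reporting NetworkReady=false and the kubelet keeps re-recording NodeNotReady.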
event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.488323 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.488342 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.488355 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: E0216 11:08:25.509853 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.514451 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.514487 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.514494 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.514509 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.514520 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: E0216 11:08:25.530535 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:25 crc kubenswrapper[4797]: E0216 11:08:25.530687 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.532778 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.532820 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.532830 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.532846 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.532858 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.635487 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.635522 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.635531 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.635545 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.635556 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.738600 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.738946 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.739081 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.739210 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.739336 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.843086 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.843159 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.843183 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.843217 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.843246 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.946719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.946765 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.946778 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.946795 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.946808 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:25Z","lastTransitionTime":"2026-02-16T11:08:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:25 crc kubenswrapper[4797]: I0216 11:08:25.990911 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 18:02:06.950952633 +0000 UTC Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.001786 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:25Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.018565 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 
11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.040088 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.051465 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.051544 4797 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.051608 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.051648 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.051675 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.062943 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.083235 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.108473 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.129915 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.151143 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.154924 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.154997 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.155016 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.155039 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.155057 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.182831 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:08:11Z\\\",\\\"message\\\":\\\"work=default : 12.093601ms\\\\nI0216 11:08:11.265553 6836 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 11:08:11.265982 6836 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 11:08:11.266439 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 11:08:11.266478 6836 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 11:08:11.266515 6836 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:08:11.266514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:08:11.266538 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:08:11.266552 6836 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:08:11.266561 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:08:11.266631 6836 factory.go:656] Stopping watch factory\\\\nI0216 11:08:11.266639 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:08:11.266660 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:08:11.266662 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0216 11:08:11.266683 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:08:11.266695 6836 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 11:08:11.266770 6836 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:08:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.200100 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.216525 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.249548 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8691a3e-6cc8-4572-9944-0959966961df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5515231a0c89ca3dc95a5a7378dd7d8423a64cb385913c2896fff07d732f5577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3a42a006bd7e94f2d8bf0eb3497c6855085a7b46bc9b6160e2374b622093f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbc0d06905291749751153453b35b030114f2ace32e976e9df9b2146bb62fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf8efc14db2b408cd36560a7acc7da0745dd59512eb8a4844d76a406658106e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://037be71a565fee6fccf499bb13d62caa8649d7e7b509f68b998dd7180c85d6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfb8bf618f1da60c2e2200452e17bfbcaec2ee0a502c5e468dbe2a8216eaa0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb8bf618f1da60c2e2200452e17bfbcaec2ee0a502c5e468dbe2a8216eaa0ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe949bb7dd3abb04b1312984b3ba50a2ac5456997e75286042b76a674f33898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441e
cd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe949bb7dd3abb04b1312984b3ba50a2ac5456997e75286042b76a674f33898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ac5a93c9d4dac107e9798f3ea98b14180ce9ad38fa1048850568c176ab08832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5a93c9d4dac107e9798f3ea98b14180ce9ad38fa1048850568c176ab08832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.257930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.257969 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.257983 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.258003 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.258018 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.265634 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.281496 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.291414 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.302166 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.316722 4797 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.340526 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.356079 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80704342-8cf6-432d-a729-c9ed85d25843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03ac68651e6f65e2295acfcc538003af7c162a7fb76761c3e28d3b15e1c0c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318
bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:26Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.361441 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.361515 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.361526 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.361546 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.361560 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
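Has your network provider started?"}

The two "Failed to update status for pod" records above share one root cause: the status patch is intercepted by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-16. Go's crypto/x509 rejects any certificate presented outside its validity window, which yields the exact "certificate has expired or is not yet valid" text quoted in both records. A minimal sketch of the same time-window comparison, using the two timestamps from the log (illustration only, not kubelet's actual code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both timestamps are taken verbatim from the kubelet errors above.
        notAfter, err := time.Parse(time.RFC3339, "2025-08-24T17:21:41Z")
        if err != nil {
            panic(err)
        }
        now, err := time.Parse(time.RFC3339, "2026-02-16T11:08:26Z")
        if err != nil {
            panic(err)
        }
        // crypto/x509 performs the equivalent comparison during chain
        // verification; a clock past NotAfter fails the TLS handshake.
        if now.After(notAfter) {
            fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
                now.Format(time.RFC3339), notAfter.Format(time.RFC3339))
        }
    }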
Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.491914 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.492011 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.492030 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.492056 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.492073 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.596133 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.596191 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.596207 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.596228 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.596240 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.698682 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.698731 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.698742 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.698758 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.698770 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.801257 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.801321 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.801338 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.801361 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.801415 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.904526 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.904616 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.904634 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.904659 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.904676 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:26Z","lastTransitionTime":"2026-02-16T11:08:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.982393 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.982478 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.982664 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:26 crc kubenswrapper[4797]: E0216 11:08:26.982666 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
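Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"

Every NetworkReady=false condition and "Error syncing pod, skipping" record in this stretch reduces to the same probe: the container runtime finds no CNI configuration file in /etc/kubernetes/cni/net.d/, so the kubelet keeps the node NotReady and refuses to create sandboxes for pods that need pod networking. A hedged sketch of such a directory probe (the path comes from the log; the extension filter is a conventional assumption, not CRI-O's exact logic):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // hasCNIConf reports whether dir holds at least one CNI network
    // configuration file; .conf, .conflist and .json are the usual names.
    func hasCNIConf(dir string) (bool, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return false, err
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasCNIConf("/etc/kubernetes/cni/net.d")
        if err != nil {
            fmt.Println("cannot read CNI config dir:", err)
            return
        }
        if !ok {
            fmt.Println("network plugin not ready: no CNI configuration file")
            return
        }
        fmt.Println("CNI configuration present")
    }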
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.982720 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:26 crc kubenswrapper[4797]: E0216 11:08:26.982925 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:26 crc kubenswrapper[4797]: E0216 11:08:26.983024 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:26 crc kubenswrapper[4797]: E0216 11:08:26.983405 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:26 crc kubenswrapper[4797]: I0216 11:08:26.991529 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 06:31:38.230599848 +0000 UTC Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.009216 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.009299 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.009323 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.009353 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.009374 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.032820 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:27 crc kubenswrapper[4797]: E0216 11:08:27.032960 4797 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:08:27 crc kubenswrapper[4797]: E0216 11:08:27.033015 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs podName:1f19a4ae-a737-4818-82b5-db20cafd45c7 nodeName:}" failed. No retries permitted until 2026-02-16 11:09:31.032998793 +0000 UTC m=+165.753183773 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs") pod "network-metrics-daemon-cglwk" (UID: "1f19a4ae-a737-4818-82b5-db20cafd45c7") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.112001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.112078 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.112095 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.112120 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.112138 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
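Has your network provider started?"}

The MountVolume.SetUp failure above is parked rather than retried immediately: nestedpendingoperations records "No retries permitted until 2026-02-16 11:09:31 ... (durationBeforeRetry 1m4s)", i.e. the kubelet backs off exponentially on a repeatedly failing volume operation. The 1m4s figure is consistent with a delay that starts at 500ms and doubles per failure (64s after the eighth consecutive failure); those constants are an assumption for illustration, as the log does not show them. A sketch of that doubling series:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed parameters: 500ms initial delay, doubling on each failure.
        // The series reproduces the observed 1m4s durationBeforeRetry at
        // failure 8; kubelet's exact constants are not visible in this log.
        delay := 500 * time.Millisecond
        for failures := 1; failures <= 8; failures++ {
            fmt.Printf("failure %d: next retry in %s\n", failures, delay)
            delay *= 2
        }
    }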
Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.215814 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.215884 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.215908 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.215939 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.215961 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.318261 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.318344 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.318365 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.318394 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.318417 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.420389 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.420424 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.420435 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.420448 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.420458 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.522999 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.523066 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.523083 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.523107 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.523123 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.625511 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.625560 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.625596 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.625617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.625629 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.729178 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.729481 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.729630 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.729762 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.729883 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.832976 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.833312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.833413 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.833514 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.833642 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.936767 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.936810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.936822 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.936837 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.936848 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:27Z","lastTransitionTime":"2026-02-16T11:08:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:27 crc kubenswrapper[4797]: I0216 11:08:27.992249 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 04:43:34.842852844 +0000 UTC Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.039449 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.039492 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.039503 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.039520 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.039533 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.142241 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.142304 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.142325 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.142355 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.142376 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
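Has your network provider started?"}

The certificate_manager.go records interleaved through this stream report the same kubelet-serving certificate (expiring 2026-02-24 05:53:03 UTC) with a different rotation deadline each time (2025-12-07 above, 2026-01-14 here, and so on): client-go's certificate manager re-draws a jittered deadline inside the later portion of the certificate's validity window on every evaluation. A sketch of that behavior, assuming a 70-90% window; the jitter constants and the NotBefore value (not shown in the log) are placeholders for illustration:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline draws a random instant in the 70-90% span of the
    // certificate lifetime. The 0.7/0.2 constants are assumptions; the
    // exact jitter in client-go varies by version.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        lifetime := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Expiry taken from the log; the issue date is hypothetical.
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0)
        for i := 0; i < 4; i++ {
            deadline := rotationDeadline(notBefore, notAfter)
            fmt.Println("rotation deadline:", deadline.Format(time.RFC3339))
        }
    }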
Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.244447 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.244489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.244497 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.244509 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.244517 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.346311 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.346350 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.346363 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.346378 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.346387 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.448928 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.449043 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.449065 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.449103 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.449135 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.551508 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.551566 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.551656 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.551692 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.551716 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.654698 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.654750 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.654761 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.654783 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.654794 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.757012 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.757072 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.757084 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.757102 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.757117 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.859627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.859671 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.859680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.859694 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.859702 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.962430 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.962508 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.962532 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.962571 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.962651 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:28Z","lastTransitionTime":"2026-02-16T11:08:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.982548 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.982644 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.982605 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.982646 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:28 crc kubenswrapper[4797]: E0216 11:08:28.982790 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:28 crc kubenswrapper[4797]: E0216 11:08:28.983006 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:28 crc kubenswrapper[4797]: E0216 11:08:28.983168 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:28 crc kubenswrapper[4797]: E0216 11:08:28.983367 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:28 crc kubenswrapper[4797]: I0216 11:08:28.993303 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:26:15.311246021 +0000 UTC Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.068556 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.068653 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.068677 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.068707 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.068731 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.171014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.171061 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.171077 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.171098 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.171114 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.275036 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.275107 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.275140 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.275166 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.275184 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.379552 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.379625 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.379640 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.379655 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.379666 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.481080 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.481112 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.481120 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.481132 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.481141 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.583847 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.583942 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.583980 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.584017 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.584039 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.687038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.687102 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.687120 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.687141 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.687155 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.790294 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.790349 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.790366 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.790389 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.790406 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.893326 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.893413 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.893440 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.893476 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.893500 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.994035 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:20:16.942824698 +0000 UTC Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.995999 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.996062 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.996082 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.996108 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:29 crc kubenswrapper[4797]: I0216 11:08:29.996127 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:29Z","lastTransitionTime":"2026-02-16T11:08:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.098930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.098992 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.099045 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.099434 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.099481 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.202169 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.202228 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.202245 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.202268 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.202284 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.304761 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.304838 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.304856 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.304884 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.304904 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.407627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.407695 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.407714 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.407741 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.407761 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.511463 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.511541 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.511562 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.511627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.511645 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.614019 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.614049 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.614058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.614073 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.614082 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.716114 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.716154 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.716166 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.716182 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.716195 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.818427 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.818494 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.818510 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.818535 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.818552 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.921125 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.921252 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.921277 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.921308 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.921329 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:30Z","lastTransitionTime":"2026-02-16T11:08:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.982827 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.982884 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.982883 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.982951 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:30 crc kubenswrapper[4797]: E0216 11:08:30.983170 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:30 crc kubenswrapper[4797]: E0216 11:08:30.983270 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:30 crc kubenswrapper[4797]: E0216 11:08:30.983418 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:30 crc kubenswrapper[4797]: E0216 11:08:30.983548 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:30 crc kubenswrapper[4797]: I0216 11:08:30.994194 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:22:38.780867852 +0000 UTC Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.024280 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.024340 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.024355 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.024376 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.024388 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.126658 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.126707 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.126716 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.126731 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.126741 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.229889 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.229943 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.229960 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.229984 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.230001 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.332350 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.332403 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.332417 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.332434 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.332446 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.435865 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.435929 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.435964 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.436001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.436021 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.538847 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.538901 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.538923 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.538951 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.538973 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.642529 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.642565 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.642573 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.642602 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.642613 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.745623 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.745704 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.745726 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.745750 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.745768 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.848849 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.848918 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.848938 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.848964 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.848980 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.952302 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.952347 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.952357 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.952374 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.952388 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:31Z","lastTransitionTime":"2026-02-16T11:08:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:31 crc kubenswrapper[4797]: I0216 11:08:31.995275 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:24:05.84194004 +0000 UTC Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.054953 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.055002 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.055014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.055032 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.055044 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.157920 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.157966 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.157980 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.158002 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.158016 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.260661 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.260715 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.260731 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.260754 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.260774 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.365405 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.365467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.365484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.365510 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.365528 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.469411 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.469465 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.469484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.469506 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.469525 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.572896 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.572957 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.572973 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.572997 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.573014 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.675038 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.675107 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.675131 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.675159 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.675180 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.777746 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.777804 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.777816 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.777830 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.777839 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.881135 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.881255 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.881282 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.881310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.881331 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.981958 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.982055 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.982084 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.982504 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:32 crc kubenswrapper[4797]: E0216 11:08:32.982903 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:32 crc kubenswrapper[4797]: E0216 11:08:32.982990 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:32 crc kubenswrapper[4797]: E0216 11:08:32.983031 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:32 crc kubenswrapper[4797]: E0216 11:08:32.983101 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.984247 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.984344 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.984371 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.984407 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.984432 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:32Z","lastTransitionTime":"2026-02-16T11:08:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:32 crc kubenswrapper[4797]: I0216 11:08:32.995779 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:43:15.337162881 +0000 UTC Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.087968 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.088018 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.088028 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.088046 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.088057 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.191425 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.191491 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.191501 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.191517 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.191529 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.294814 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.294857 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.294866 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.294880 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.294889 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.398094 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.398134 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.398146 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.398160 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.398169 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.501766 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.501857 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.501881 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.501919 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.501944 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.605136 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.605255 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.605280 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.605311 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.605331 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.708393 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.708433 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.708445 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.708461 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.708475 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.811811 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.811874 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.811890 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.811911 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.811926 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.914428 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.914482 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.914494 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.914510 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.914520 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:33Z","lastTransitionTime":"2026-02-16T11:08:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:33 crc kubenswrapper[4797]: I0216 11:08:33.995894 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 08:15:23.711056335 +0000 UTC Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.017939 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.018012 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.018034 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.018060 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.018083 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:34Z","lastTransitionTime":"2026-02-16T11:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.120564 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.120644 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.120662 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.120681 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.120696 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:34Z","lastTransitionTime":"2026-02-16T11:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.223068 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.223179 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.223211 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.223241 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.223334 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:34Z","lastTransitionTime":"2026-02-16T11:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.326497 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.326569 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.326635 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.326670 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.326694 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:34Z","lastTransitionTime":"2026-02-16T11:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.429618 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.429698 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.429720 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.429749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.429773 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:34Z","lastTransitionTime":"2026-02-16T11:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.532657 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.532779 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.532804 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.532831 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.532851 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:34Z","lastTransitionTime":"2026-02-16T11:08:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[identical five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeated at 11:08:34.635937, 11:08:34.739074, 11:08:34.842551, and 11:08:34.945676]
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.982285 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.982364 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.982418 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk"
Feb 16 11:08:34 crc kubenswrapper[4797]: E0216 11:08:34.982500 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.982672 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 11:08:34 crc kubenswrapper[4797]: E0216 11:08:34.982885 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 11:08:34 crc kubenswrapper[4797]: E0216 11:08:34.983015 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7"
Feb 16 11:08:34 crc kubenswrapper[4797]: E0216 11:08:34.983136 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 11:08:34 crc kubenswrapper[4797]: I0216 11:08:34.996474 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 14:49:30.670028143 +0000 UTC
Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.049358 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.049437 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.049456 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.049490 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.049518 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:35Z","lastTransitionTime":"2026-02-16T11:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[identical five-entry cycle ("NodeHasSufficientMemory" through "Node became not ready") repeated nine more times between 11:08:35.152206 and 11:08:35.880983]
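The recurring NetworkPluginNotReady reason traces back to the CNI plugin finding no network configuration under /etc/kubernetes/cni/net.d/. A minimal sketch of that probe, assuming only file discovery matters (the real libcni loader also parses and validates each candidate file before declaring the network ready):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any plausible CNI config file.
// Extension filtering follows the usual CNI conventions (.conf, .conflist, .json).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	const dir = "/etc/kubernetes/cni/net.d/"
	ok, err := hasCNIConfig(dir)
	if err != nil || !ok {
		// Matches the condition message repeated throughout this log.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s (err=%v)\n", dir, err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Since the loop above keeps reporting NetworkReady=false, the directory is presumably still empty at this point in the boot; the network operator has not yet written its configuration.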
Feb 16 11:08:35 crc kubenswrapper[4797]: E0216 11:08:35.892609 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:35Z is after 2025-08-24T17:21:41Z"
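The status patch itself is well formed; it dies in transport. The API server's POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired on 2025-08-24, months before the node's current clock of 2026-02-16. A short sketch of the validity-window check that produces this x509 error, assuming a placeholder path for the certificate file:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Placeholder path: wherever the webhook's serving certificate lives.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	switch {
	case now.After(cert.NotAfter):
		// The case in this log: clock 2026-02-16, NotAfter 2025-08-24.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}
```

The comparison is purely local clock-versus-certificate arithmetic, so every retry fails identically until the certificate is rotated or the clock changes.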
event="NodeHasNoDiskPressure" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.896375 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.896389 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.896399 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:35Z","lastTransitionTime":"2026-02-16T11:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:35 crc kubenswrapper[4797]: E0216 11:08:35.951868 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:35Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.955064 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.955104 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.955115 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.955132 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.955145 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:35Z","lastTransitionTime":"2026-02-16T11:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:35 crc kubenswrapper[4797]: E0216 11:08:35.968896 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:35Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.972676 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.972719 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
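Each retry re-sends the same strategic merge patch to the node's status subresource; the $setElementOrder/conditions directive pins the order of the conditions list, which the server merges by its "type" key. A trimmed, illustrative reconstruction of that patch body (values shortened to the Ready condition; the real payload also carries allocatable, capacity, images, and nodeInfo):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Order directive: one {"type": ...} stub per condition, merge key "type".
	order := []map[string]string{
		{"type": "MemoryPressure"},
		{"type": "DiskPressure"},
		{"type": "PIDPressure"},
		{"type": "Ready"},
	}
	// Only the updated entries need to appear; shown here: Ready.
	conditions := []map[string]string{{
		"type":               "Ready",
		"status":             "False",
		"reason":             "KubeletNotReady",
		"lastHeartbeatTime":  "2026-02-16T11:08:35Z",
		"lastTransitionTime": "2026-02-16T11:08:35Z",
	}}
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": order,
			"conditions":                  conditions,
		},
	}
	b, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	// This JSON is the shape of the body behind "failed to patch status".
	fmt.Println(string(b))
}
```

Because the webhook rejects the call before the patch is ever admitted, nothing on the kubelet side changes between attempts and the payload repeats byte for byte.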
event="NodeHasNoDiskPressure" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.972731 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.972749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.972762 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:35Z","lastTransitionTime":"2026-02-16T11:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:35 crc kubenswrapper[4797]: E0216 11:08:35.983958 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:35Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.987381 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.987422 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.987433 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.987447 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.987458 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:35Z","lastTransitionTime":"2026-02-16T11:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.993746 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffc7a6ce-5bfa-4d2f-9ee8-9aba721036a2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f0d36ef1e81ae5af530f1fe01e10660e05c836b4c3eb7a4d74fc6de8d4440be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://21442b582407535d33311d2a9117cfe7b528510738f5cb295eb5ad23118544ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38abf23a7edba74a8e792559230e2475becf1fc09721e383b9d7694d83adb065\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00a82c625468af05eec97af48354ec5d5f96b6b4240554486ebd5b29f110e4b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:35Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.997046 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:02:10.892919857 +0000 UTC Feb 16 11:08:35 crc kubenswrapper[4797]: E0216 11:08:35.997803 4797 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T11:08:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"fbba5025-2e12-492d-9c5c-fa0555b0b84a\\\",\\\"systemUUID\\\":\\\"599a276a-da76-4549-96c4-dbb5c7e37426\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:35Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:35 crc kubenswrapper[4797]: E0216 11:08:35.997978 4797 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.999251 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.999282 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.999290 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.999304 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:35 crc kubenswrapper[4797]: I0216 11:08:35.999314 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:35Z","lastTransitionTime":"2026-02-16T11:08:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.006308 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac7fc57b-ad0c-4b7c-b65c-6f930a3d66ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://271e99c566b83153c13eae8b879f82b23dd9ad7d5d125ffeff2e4d7588dceb1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d44c9ff01fb45495e6eb72d9975ea6c7fdca32e9339776724c562be9f90e215\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fxq6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:21Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-vnjnm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.021083 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"705d9f4b-2610-4bce-8adf-a80a8c630c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T11:07:07Z\\\",\\\"message\\\":\\\"1.579808 1 observer_polling.go:159] Starting file observer\\\\nW0216 11:07:01.583788 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 11:07:01.584023 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 11:07:01.585129 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2439149610/tls.crt::/tmp/serving-cert-2439149610/tls.key\\\\\\\"\\\\nI0216 11:07:07.342271 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 11:07:07.388290 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 11:07:07.388327 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 11:07:07.388357 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 11:07:07.388402 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 11:07:07.396723 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 11:07:07.396760 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396767 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 11:07:07.396772 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 11:07:07.396777 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 11:07:07.396781 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 11:07:07.396785 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 11:07:07.396934 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.034786 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5e123dc6ffb0820f9143b0c89ca189ca533457b0abe58078f065ea9b17303e59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.047826 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://28891f4e5e4223b3e6a27a07df1a9b7f73d77cc47ab50e8d74835ac43039ad05\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d91f83049f86652adaf240f3bd545f1f00c36ff4f7c172cec5a2385958dd1e6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.060439 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.079145 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-5qvbt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9532a098-7e41-454c-af48-44f9a9478d12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:07:57Z\\\",\\\"message\\\":\\\"2026-02-16T11:07:12+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a\\\\n2026-02-16T11:07:12+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_242f238b-c653-4abc-bf6e-822b2eed2e0a to /host/opt/cni/bin/\\\\n2026-02-16T11:07:12Z [verbose] multus-daemon started\\\\n2026-02-16T11:07:12Z [verbose] Readiness Indicator file check\\\\n2026-02-16T11:07:57Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rszb5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5qvbt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.099635 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"377bb3bb-1c3d-4cc5-a159-2d116f464492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://32f7f2e7f4f84d28c732f0f519230b7846d2ee89acb239b075fdea8158022f67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0f52e05677633d44f610a5357c34f89a683320c01834fb4dc5c367a832d000a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6008c527c7fe4fa09299dd27dfd73fe354febd10d57756fcdde44a67c92ccaf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e6472734bdd306cf059af8c274647643f65c867273c0199c2fa719f02a41028\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e083c2c1e1dcf5f597511fa8d16d1d839dcc928ba6a1bbd054e95042d16d7eba\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://80d376ff2f73efef03f6c5211736ffd06e58effec2fdb338f0f8c3cea065269c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3784b10ee94d6f93ae2ca3a2a6d08da9ab4b95b7a130c49c379ad724e59aad91\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg9cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8h8ld\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.102288 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.102317 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc 
kubenswrapper[4797]: I0216 11:08:36.102342 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.102357 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.102367 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.114255 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8fa0761824174ee9552426bd4ea5617d75f9f498a6bd9b050480855f582e0999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.126982 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"128f4e85-fd17-4281-97d2-872fda792b21\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb13bbefa020a3de5b413013ae414b7a605ba456baf291626bdcdfe9b7364a52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-59p29\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-lkgrl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.155535 4797 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"812f1f08-469d-44f4-907e-60ad61837364\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T11:08:11Z\\\",\\\"message\\\":\\\"work=default : 12.093601ms\\\\nI0216 11:08:11.265553 6836 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0216 11:08:11.265982 6836 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0216 11:08:11.266439 6836 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 11:08:11.266478 6836 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 11:08:11.266515 6836 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 11:08:11.266514 6836 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 11:08:11.266538 6836 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 11:08:11.266552 6836 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 11:08:11.266561 6836 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 11:08:11.266631 6836 factory.go:656] Stopping watch factory\\\\nI0216 11:08:11.266639 6836 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 11:08:11.266660 6836 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 11:08:11.266662 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0216 11:08:11.266683 6836 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 11:08:11.266695 6836 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0216 11:08:11.266770 6836 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T11:08:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:07:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mv4sj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h9hsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.172677 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e505cc2-6e37-4603-bd70-4c182eea4266\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f57f179d0f0c2ef7691c610bc2ceaa1ae7fcdf939e4bd39b23e027220332953\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9def53290a465b5198a4788079ad7238399fdce896ad1940061a8da0b6fb6347\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6706348144f196874dcb9196fc12255bee00be9299309a5f9a0653cb802f14d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.186066 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"80704342-8cf6-432d-a729-c9ed85d25843\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e03ac68651e6f65e2295acfcc538003af7c162a7fb76761c3e28d3b15e1c0c13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://44a41fc51d7bbc1283bb9896ce89b415267374405ca087fc40fd8f80fbae4cc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.205157 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.205205 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.205216 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.205234 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.205247 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.218527 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8691a3e-6cc8-4572-9944-0959966961df\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:06:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5515231a0c89ca3dc95a5a7378dd7d8423a64cb385913c2896fff07d732f5577\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8b3a42a006bd7e94f2d8bf0eb3497c6855085a7b46bc9b6160e2374b622093f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afbc0d06905291749751153453b35b030114f2ace32e976e9df9b2146bb62fe9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf8efc14db2b408cd36560a7acc7da0745dd59512eb8a4844d76a406658106e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://037be71a565fee6fccf499bb13d62caa8649d7e7b509f68b998dd7180c85d6b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:06:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bfb8bf618f1da60c2e2200452e17bfbcaec2ee0a502c5e468dbe2a8216eaa0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bfb8bf618f1da60c2e2200452e17bfbcaec2ee0a502c5e468dbe2a8216eaa0ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:47Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1fe949bb7dd3abb04b1312984b3ba50a2ac5456997e75286042b76a674f33898\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1fe949bb7dd3abb04b1312984b3ba50a2ac5456997e75286042b76a674f33898\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:48Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1ac5a93c9d4dac107e9798f3ea98b14180ce9ad38fa1048850568c176ab08832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ac5a93c9d4dac107e9798f3ea98b14180ce9ad38fa1048850568c176ab08832\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T11:06:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T11:06:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:06:46Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.234089 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.249001 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.263115 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-rd6dh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e28dd15-03ea-4c9f-94d0-7b953d0c4044\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bd6b0946f5927c7746ffc36f88d75eb1e70562cf1d598d4bb9749147590740d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xtl6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-rd6dh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.277390 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-77slb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1b86971c-f0fb-492a-ade1-9535933f5d2b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2da31fb260e7bc061dd05766d91c63409658f202570621aee4907b203ac5a08c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T11:07:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-789z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-77slb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.290711 4797 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-cglwk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f19a4ae-a737-4818-82b5-db20cafd45c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T11:07:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2g9vs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T11:07:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-cglwk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T11:08:36Z is after 2025-08-24T17:21:41Z" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.308642 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.308757 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.308777 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.308804 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.308826 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.410936 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.410996 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.411014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.411290 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.411314 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.513673 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.513748 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.513772 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.513800 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.513822 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.616654 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.616710 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.616727 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.616750 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.616766 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.718783 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.718893 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.718917 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.718948 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.718969 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.822529 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.822654 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.822674 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.822696 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.822712 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.925930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.925978 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.925997 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.926020 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.926036 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:36Z","lastTransitionTime":"2026-02-16T11:08:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.982029 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:36 crc kubenswrapper[4797]: E0216 11:08:36.982226 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.982523 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.982644 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:36 crc kubenswrapper[4797]: E0216 11:08:36.982711 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.982545 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:36 crc kubenswrapper[4797]: E0216 11:08:36.982814 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:36 crc kubenswrapper[4797]: E0216 11:08:36.982993 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:36 crc kubenswrapper[4797]: I0216 11:08:36.997919 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 18:02:52.378927244 +0000 UTC Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.029684 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.029744 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.029759 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.029781 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.029796 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.133118 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.133190 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.133210 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.133229 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.133244 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.236886 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.236968 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.236982 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.237001 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.237013 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.339637 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.339714 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.339733 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.339759 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.339780 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.442157 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.442224 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.442241 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.442265 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.442285 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.545025 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.545103 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.545131 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.545156 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.545175 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.649880 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.649953 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.649967 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.649990 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.650006 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.752327 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.752372 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.752383 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.752401 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.752411 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.854569 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.854723 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.854748 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.854781 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.854805 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.957324 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.957363 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.957391 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.957405 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.957414 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:37Z","lastTransitionTime":"2026-02-16T11:08:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.984397 4797 scope.go:117] "RemoveContainer" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b" Feb 16 11:08:37 crc kubenswrapper[4797]: E0216 11:08:37.984923 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-h9hsp_openshift-ovn-kubernetes(812f1f08-469d-44f4-907e-60ad61837364)\"" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" Feb 16 11:08:37 crc kubenswrapper[4797]: I0216 11:08:37.998983 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:17:12.966795401 +0000 UTC Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.060307 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.060572 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.060660 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.060696 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.060721 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.164355 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.164409 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.164426 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.164450 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.164469 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.267667 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.267753 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.267779 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.267810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.267832 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.370326 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.370392 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.370418 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.370451 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.370474 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.472044 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.472084 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.472093 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.472110 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.472119 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.574675 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.574732 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.574750 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.574773 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.574790 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.677128 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.677198 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.677211 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.677228 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.677263 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.779676 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.779720 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.779732 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.779749 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.779760 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.882823 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.882868 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.882877 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.882893 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.882904 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.982658 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.983118 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.983309 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:38 crc kubenswrapper[4797]: E0216 11:08:38.983220 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:38 crc kubenswrapper[4797]: E0216 11:08:38.983534 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.983853 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:38 crc kubenswrapper[4797]: E0216 11:08:38.984088 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:38 crc kubenswrapper[4797]: E0216 11:08:38.984484 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.985970 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.986037 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.986051 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.986066 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.986113 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:38Z","lastTransitionTime":"2026-02-16T11:08:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:38 crc kubenswrapper[4797]: I0216 11:08:38.999387 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:27:05.270353787 +0000 UTC Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.089218 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.089266 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.089280 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.089298 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.089309 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.191318 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.191355 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.191363 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.191378 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.191389 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.294013 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.294069 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.294087 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.294115 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.294136 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.397367 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.397404 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.397416 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.397432 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.397443 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.500483 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.500567 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.500622 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.500646 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.500662 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.603987 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.604308 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.604417 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.604527 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.604778 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.707832 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.707868 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.707879 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.707896 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.707911 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.810734 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.810771 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.810779 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.810792 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.810801 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.913767 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.913848 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.913865 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.913890 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:39 crc kubenswrapper[4797]: I0216 11:08:39.913906 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:39Z","lastTransitionTime":"2026-02-16T11:08:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:39.999921 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 01:14:10.5791635 +0000 UTC Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.017217 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.017268 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.017287 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.017309 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.017326 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.119558 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.119676 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.119700 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.119728 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.119749 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.222107 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.222142 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.222150 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.222163 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.222172 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.324229 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.324298 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.324320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.324348 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.324371 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.427126 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.427247 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.427265 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.427327 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.427346 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.529965 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.530005 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.530013 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.530026 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.530036 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.632931 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.632976 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.632989 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.633006 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.633016 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.735024 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.735058 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.735067 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.735096 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.735105 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.837112 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.837151 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.837173 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.837205 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.837228 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.939680 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.939751 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.939769 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.939793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.939810 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:40Z","lastTransitionTime":"2026-02-16T11:08:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.982409 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.982428 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.982487 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:40 crc kubenswrapper[4797]: E0216 11:08:40.983167 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:40 crc kubenswrapper[4797]: E0216 11:08:40.982987 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:40 crc kubenswrapper[4797]: I0216 11:08:40.982482 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:40 crc kubenswrapper[4797]: E0216 11:08:40.983397 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:40 crc kubenswrapper[4797]: E0216 11:08:40.983248 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.000281 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:35:13.687184301 +0000 UTC Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.042399 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.042439 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.042449 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.042463 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.042474 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.144159 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.144429 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.144492 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.144560 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.144641 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.247807 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.247862 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.247879 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.247904 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.247922 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.350789 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.350882 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.350903 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.350930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.350954 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.455146 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.455207 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.455227 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.455256 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.455279 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.558967 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.559014 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.559030 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.559235 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.559280 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.661782 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.661823 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.661833 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.661848 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.661859 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.764143 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.764177 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.764185 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.764197 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.764205 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.867201 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.867251 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.867262 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.867278 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.867291 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.970047 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.970087 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.970097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.970109 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:41 crc kubenswrapper[4797]: I0216 11:08:41.970119 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:41Z","lastTransitionTime":"2026-02-16T11:08:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.001142 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:50:15.555735289 +0000 UTC Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.073087 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.073160 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.073180 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.073202 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.073221 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.176568 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.176677 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.176694 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.176718 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.176737 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.279531 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.279617 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.279631 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.279649 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.279662 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.382878 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.383668 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.383688 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.383713 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.383723 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.485422 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.485492 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.485501 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.485735 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.485748 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.588477 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.588519 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.588527 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.588546 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.588563 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.690830 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.690877 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.690895 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.690916 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.690932 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.792738 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.792793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.792816 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.792843 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.792863 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.895375 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.895467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.895484 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.895506 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.895524 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.981849 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.981897 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:42 crc kubenswrapper[4797]: E0216 11:08:42.982307 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.981990 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:42 crc kubenswrapper[4797]: E0216 11:08:42.982955 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.981956 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:42 crc kubenswrapper[4797]: E0216 11:08:42.982492 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:42 crc kubenswrapper[4797]: E0216 11:08:42.983302 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.998220 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.998272 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.998293 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.998327 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:42 crc kubenswrapper[4797]: I0216 11:08:42.998344 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:42Z","lastTransitionTime":"2026-02-16T11:08:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.001464 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:06:55.8003138 +0000 UTC Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.101476 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.101838 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.101930 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.102067 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.102167 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.205440 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.205467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.205475 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.205489 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.205499 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.308097 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.308370 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.308448 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.308526 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.308613 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.410765 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.410798 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.410806 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.410823 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.410835 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.512941 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.513141 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.513228 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.513299 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.513373 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.617242 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.617292 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.617304 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.617320 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.617333 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.697536 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/1.log" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.698287 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/0.log" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.698413 4797 generic.go:334] "Generic (PLEG): container finished" podID="9532a098-7e41-454c-af48-44f9a9478d12" containerID="add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc" exitCode=1 Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.698509 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerDied","Data":"add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.698570 4797 scope.go:117] "RemoveContainer" containerID="c6b0622a4a82b8a4b9b7c66a930ed9246a672abd3a08bff9142dd2c812b121c5" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.698984 4797 scope.go:117] "RemoveContainer" containerID="add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc" Feb 16 11:08:43 crc kubenswrapper[4797]: E0216 11:08:43.699131 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-5qvbt_openshift-multus(9532a098-7e41-454c-af48-44f9a9478d12)\"" pod="openshift-multus/multus-5qvbt" podUID="9532a098-7e41-454c-af48-44f9a9478d12" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.720378 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.720793 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.720807 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.720824 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.720838 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.752187 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=27.752166132 podStartE2EDuration="27.752166132s" podCreationTimestamp="2026-02-16 11:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.739534215 +0000 UTC m=+118.459719215" watchObservedRunningTime="2026-02-16 11:08:43.752166132 +0000 UTC m=+118.472351122" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.798714 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-rd6dh" podStartSLOduration=96.798690124 podStartE2EDuration="1m36.798690124s" podCreationTimestamp="2026-02-16 11:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.781405546 +0000 UTC m=+118.501590536" watchObservedRunningTime="2026-02-16 11:08:43.798690124 +0000 UTC m=+118.518875134" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.798917 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-77slb" podStartSLOduration=96.79890904 podStartE2EDuration="1m36.79890904s" podCreationTimestamp="2026-02-16 11:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.797952554 +0000 UTC m=+118.518137564" watchObservedRunningTime="2026-02-16 11:08:43.79890904 +0000 UTC m=+118.519094050" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.823417 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.823455 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.823463 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.823477 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.823486 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.831506 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=97.83148864 podStartE2EDuration="1m37.83148864s" podCreationTimestamp="2026-02-16 11:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.831085029 +0000 UTC m=+118.551270019" watchObservedRunningTime="2026-02-16 11:08:43.83148864 +0000 UTC m=+118.551673620" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.856188 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=66.856158356 podStartE2EDuration="1m6.856158356s" podCreationTimestamp="2026-02-16 11:07:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.855139627 +0000 UTC m=+118.575324647" watchObservedRunningTime="2026-02-16 11:08:43.856158356 +0000 UTC m=+118.576343336" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.856477 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.856469585 podStartE2EDuration="35.856469585s" podCreationTimestamp="2026-02-16 11:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.8424621 +0000 UTC m=+118.562647120" watchObservedRunningTime="2026-02-16 11:08:43.856469585 +0000 UTC m=+118.576654575" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.868980 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-vnjnm" podStartSLOduration=95.868957767 podStartE2EDuration="1m35.868957767s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.867525007 +0000 UTC m=+118.587709987" watchObservedRunningTime="2026-02-16 11:08:43.868957767 +0000 UTC m=+118.589142767" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.926499 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.926537 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.926545 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.926560 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.926569 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:43Z","lastTransitionTime":"2026-02-16T11:08:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.959207 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=95.959177493 podStartE2EDuration="1m35.959177493s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.958383951 +0000 UTC m=+118.678568951" watchObservedRunningTime="2026-02-16 11:08:43.959177493 +0000 UTC m=+118.679362513" Feb 16 11:08:43 crc kubenswrapper[4797]: I0216 11:08:43.960038 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8h8ld" podStartSLOduration=95.960022548 podStartE2EDuration="1m35.960022548s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:43.931015048 +0000 UTC m=+118.651200028" watchObservedRunningTime="2026-02-16 11:08:43.960022548 +0000 UTC m=+118.680207578" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.001939 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:40:43.257380252 +0000 UTC Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.028515 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.028627 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.028655 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.028682 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.028702 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.130672 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.130711 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.130724 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.130739 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.130750 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.232902 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.232952 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.232964 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.232982 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.232995 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.335750 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.335810 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.335825 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.335843 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.335855 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.439061 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.439137 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.439155 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.439181 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.439200 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.542549 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.542621 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.542632 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.542652 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.542663 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.646398 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.646454 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.646467 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.646487 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.646499 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.705315 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/1.log" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.749334 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.749416 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.749438 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.749471 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.749492 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.852612 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.852959 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.853037 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.853129 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.853219 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.956676 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.956906 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.956966 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.957064 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.957128 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:44Z","lastTransitionTime":"2026-02-16T11:08:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.982177 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.982220 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:44 crc kubenswrapper[4797]: E0216 11:08:44.982347 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.982357 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:44 crc kubenswrapper[4797]: E0216 11:08:44.982456 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:44 crc kubenswrapper[4797]: E0216 11:08:44.982716 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:44 crc kubenswrapper[4797]: I0216 11:08:44.982748 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:44 crc kubenswrapper[4797]: E0216 11:08:44.983091 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.002550 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 02:21:14.998552937 +0000 UTC Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.061608 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.061682 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.061696 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.061716 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.061732 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.167431 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.167495 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.167507 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.167525 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.167539 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.271057 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.271109 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.271121 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.271149 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.271163 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.374687 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.374752 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.374765 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.374787 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.374808 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.477971 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.478030 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.478049 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.478122 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.478150 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.581743 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.581808 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.581828 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.581853 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.581871 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.685493 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.685611 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.685637 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.685668 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.685694 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.789189 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.789249 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.789269 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.789312 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.789343 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.892242 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.892296 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.892310 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.892328 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:45 crc kubenswrapper[4797]: I0216 11:08:45.892341 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:45Z","lastTransitionTime":"2026-02-16T11:08:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 11:08:45 crc kubenswrapper[4797]: E0216 11:08:45.992883 4797 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.003666 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 00:19:08.724185358 +0000 UTC Feb 16 11:08:46 crc kubenswrapper[4797]: E0216 11:08:46.072887 4797 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.341244 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.341422 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.341440 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.341469 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.341487 4797 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T11:08:46Z","lastTransitionTime":"2026-02-16T11:08:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.412399 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podStartSLOduration=98.412366766 podStartE2EDuration="1m38.412366766s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:44.038181473 +0000 UTC m=+118.758366453" watchObservedRunningTime="2026-02-16 11:08:46.412366766 +0000 UTC m=+121.132551786" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.414485 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q"] Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.415238 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.418399 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.419270 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.419915 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.420648 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.458478 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48454378-f53d-4486-86e2-d62e8b28ab75-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.458563 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/48454378-f53d-4486-86e2-d62e8b28ab75-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.458800 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/48454378-f53d-4486-86e2-d62e8b28ab75-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.458854 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48454378-f53d-4486-86e2-d62e8b28ab75-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.458900 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/48454378-f53d-4486-86e2-d62e8b28ab75-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.559914 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/48454378-f53d-4486-86e2-d62e8b28ab75-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.560032 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/48454378-f53d-4486-86e2-d62e8b28ab75-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.560033 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48454378-f53d-4486-86e2-d62e8b28ab75-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.560169 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/48454378-f53d-4486-86e2-d62e8b28ab75-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.560237 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/48454378-f53d-4486-86e2-d62e8b28ab75-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.560254 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48454378-f53d-4486-86e2-d62e8b28ab75-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.560605 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/48454378-f53d-4486-86e2-d62e8b28ab75-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.562420 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/48454378-f53d-4486-86e2-d62e8b28ab75-service-ca\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.568089 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48454378-f53d-4486-86e2-d62e8b28ab75-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.578626 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/48454378-f53d-4486-86e2-d62e8b28ab75-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-cgg2q\" (UID: \"48454378-f53d-4486-86e2-d62e8b28ab75\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.743053 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.982134 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.982182 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.982214 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:46 crc kubenswrapper[4797]: E0216 11:08:46.982252 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:46 crc kubenswrapper[4797]: I0216 11:08:46.982153 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:46 crc kubenswrapper[4797]: E0216 11:08:46.982368 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:46 crc kubenswrapper[4797]: E0216 11:08:46.982490 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:46 crc kubenswrapper[4797]: E0216 11:08:46.982558 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:47 crc kubenswrapper[4797]: I0216 11:08:47.004374 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 08:19:09.535542532 +0000 UTC Feb 16 11:08:47 crc kubenswrapper[4797]: I0216 11:08:47.004476 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 11:08:47 crc kubenswrapper[4797]: I0216 11:08:47.012878 4797 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 11:08:47 crc kubenswrapper[4797]: I0216 11:08:47.720255 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" event={"ID":"48454378-f53d-4486-86e2-d62e8b28ab75","Type":"ContainerStarted","Data":"1bf8dd199bd7f581885169954ee45ffa46c4d02d30b6989bc937a62f624787eb"} Feb 16 11:08:47 crc kubenswrapper[4797]: I0216 11:08:47.720354 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" event={"ID":"48454378-f53d-4486-86e2-d62e8b28ab75","Type":"ContainerStarted","Data":"890a8fc94c77f799272c62fe5fd7e38b9cbe8ce3157a70866749d59faaaa6ef9"} Feb 16 11:08:47 crc kubenswrapper[4797]: I0216 11:08:47.742387 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-cgg2q" podStartSLOduration=99.74235848000001 podStartE2EDuration="1m39.74235848s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:47.738165582 +0000 UTC m=+122.458350602" watchObservedRunningTime="2026-02-16 11:08:47.74235848 +0000 UTC m=+122.462543530" Feb 16 11:08:48 crc kubenswrapper[4797]: I0216 11:08:48.982020 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:48 crc kubenswrapper[4797]: I0216 11:08:48.982059 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:48 crc kubenswrapper[4797]: E0216 11:08:48.982167 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:48 crc kubenswrapper[4797]: I0216 11:08:48.982212 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:48 crc kubenswrapper[4797]: I0216 11:08:48.982070 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:48 crc kubenswrapper[4797]: E0216 11:08:48.982389 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:48 crc kubenswrapper[4797]: E0216 11:08:48.982451 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:48 crc kubenswrapper[4797]: E0216 11:08:48.982528 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:50 crc kubenswrapper[4797]: I0216 11:08:50.981974 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:50 crc kubenswrapper[4797]: I0216 11:08:50.982010 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:50 crc kubenswrapper[4797]: I0216 11:08:50.982103 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:50 crc kubenswrapper[4797]: E0216 11:08:50.982272 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:50 crc kubenswrapper[4797]: I0216 11:08:50.982353 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:50 crc kubenswrapper[4797]: E0216 11:08:50.982419 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:50 crc kubenswrapper[4797]: E0216 11:08:50.982619 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:50 crc kubenswrapper[4797]: E0216 11:08:50.983135 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:51 crc kubenswrapper[4797]: E0216 11:08:51.074634 4797 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:08:52 crc kubenswrapper[4797]: I0216 11:08:52.982201 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:52 crc kubenswrapper[4797]: I0216 11:08:52.982235 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:52 crc kubenswrapper[4797]: I0216 11:08:52.982238 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:52 crc kubenswrapper[4797]: I0216 11:08:52.982317 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:52 crc kubenswrapper[4797]: E0216 11:08:52.982747 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:52 crc kubenswrapper[4797]: E0216 11:08:52.982822 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:52 crc kubenswrapper[4797]: E0216 11:08:52.982921 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:52 crc kubenswrapper[4797]: E0216 11:08:52.982978 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:52 crc kubenswrapper[4797]: I0216 11:08:52.983176 4797 scope.go:117] "RemoveContainer" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b" Feb 16 11:08:53 crc kubenswrapper[4797]: I0216 11:08:53.748552 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/3.log" Feb 16 11:08:53 crc kubenswrapper[4797]: I0216 11:08:53.752794 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerStarted","Data":"9b639213eee10103d5dd443502e1ef8136381ee923d36ae9608b41bc0a1b2954"} Feb 16 11:08:53 crc kubenswrapper[4797]: I0216 11:08:53.753487 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:08:53 crc kubenswrapper[4797]: I0216 11:08:53.792319 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podStartSLOduration=105.792287089 podStartE2EDuration="1m45.792287089s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:53.791056024 +0000 UTC m=+128.511241024" watchObservedRunningTime="2026-02-16 11:08:53.792287089 +0000 UTC m=+128.512472139" Feb 16 11:08:53 crc kubenswrapper[4797]: I0216 11:08:53.846621 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cglwk"] Feb 16 11:08:53 crc kubenswrapper[4797]: I0216 11:08:53.846786 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:53 crc kubenswrapper[4797]: E0216 11:08:53.846962 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:54 crc kubenswrapper[4797]: I0216 11:08:54.982255 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:54 crc kubenswrapper[4797]: I0216 11:08:54.982336 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:54 crc kubenswrapper[4797]: E0216 11:08:54.982802 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:54 crc kubenswrapper[4797]: I0216 11:08:54.982390 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:54 crc kubenswrapper[4797]: I0216 11:08:54.982347 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:54 crc kubenswrapper[4797]: E0216 11:08:54.982966 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:54 crc kubenswrapper[4797]: E0216 11:08:54.983049 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:54 crc kubenswrapper[4797]: E0216 11:08:54.983149 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:56 crc kubenswrapper[4797]: E0216 11:08:56.075524 4797 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 11:08:56 crc kubenswrapper[4797]: I0216 11:08:56.982278 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:56 crc kubenswrapper[4797]: I0216 11:08:56.982311 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:56 crc kubenswrapper[4797]: I0216 11:08:56.982290 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:56 crc kubenswrapper[4797]: E0216 11:08:56.982386 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:56 crc kubenswrapper[4797]: I0216 11:08:56.982276 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:56 crc kubenswrapper[4797]: E0216 11:08:56.982499 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:56 crc kubenswrapper[4797]: E0216 11:08:56.982704 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:56 crc kubenswrapper[4797]: E0216 11:08:56.983416 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:08:57 crc kubenswrapper[4797]: I0216 11:08:57.982886 4797 scope.go:117] "RemoveContainer" containerID="add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc" Feb 16 11:08:58 crc kubenswrapper[4797]: I0216 11:08:58.771131 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/1.log" Feb 16 11:08:58 crc kubenswrapper[4797]: I0216 11:08:58.771185 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerStarted","Data":"75bb8a48b7bbd354f63efb913901a9ba447a87a652655d54697b2c03365b4699"} Feb 16 11:08:58 crc kubenswrapper[4797]: I0216 11:08:58.981893 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:08:58 crc kubenswrapper[4797]: I0216 11:08:58.981986 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:08:58 crc kubenswrapper[4797]: E0216 11:08:58.982047 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:08:58 crc kubenswrapper[4797]: I0216 11:08:58.982212 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:08:58 crc kubenswrapper[4797]: E0216 11:08:58.982262 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:08:58 crc kubenswrapper[4797]: E0216 11:08:58.982201 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:08:58 crc kubenswrapper[4797]: I0216 11:08:58.982299 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:08:58 crc kubenswrapper[4797]: E0216 11:08:58.982334 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:09:00 crc kubenswrapper[4797]: I0216 11:09:00.982320 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:09:00 crc kubenswrapper[4797]: I0216 11:09:00.982396 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:09:00 crc kubenswrapper[4797]: E0216 11:09:00.982491 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 11:09:00 crc kubenswrapper[4797]: I0216 11:09:00.982509 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:09:00 crc kubenswrapper[4797]: I0216 11:09:00.982531 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:09:00 crc kubenswrapper[4797]: E0216 11:09:00.982610 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 11:09:00 crc kubenswrapper[4797]: E0216 11:09:00.982716 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-cglwk" podUID="1f19a4ae-a737-4818-82b5-db20cafd45c7" Feb 16 11:09:00 crc kubenswrapper[4797]: E0216 11:09:00.982827 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.981905 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.981960 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.981928 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.981900 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.986188 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.986568 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.986620 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.986939 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.987224 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 11:09:02 crc kubenswrapper[4797]: I0216 11:09:02.987394 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.780568 4797 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.858158 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5qvbt" podStartSLOduration=118.858127148 podStartE2EDuration="1m58.858127148s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:08:58.78944972 +0000 UTC m=+133.509634700" watchObservedRunningTime="2026-02-16 11:09:06.858127148 +0000 UTC m=+141.578312168" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.860425 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mxqz2"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.861382 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.861870 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pvwfm"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.862507 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.863265 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.866786 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.869910 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.870745 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.873117 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnvtz"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.874969 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.875331 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.875998 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.876255 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.876445 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.876552 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.881922 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.882710 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.885934 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.886397 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.886605 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.887754 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.888708 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.888851 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.888985 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.889391 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.889563 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.889868 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.889975 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.890030 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.890171 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.890363 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-dxttg"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.890599 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nqzsz"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.890886 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.891205 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.891403 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-dxttg" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.892966 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.893193 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.893323 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.893531 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.893828 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.894328 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.894449 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.894685 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.894862 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.895042 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.895105 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.895050 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.895373 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.898295 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nj877"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.898680 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.898795 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.899050 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tg9bq"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.899112 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.899230 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.899248 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.900340 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.900856 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.899488 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.901299 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.901430 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.901931 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902080 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.901979 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902266 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902399 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902308 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4d5np"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903108 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902556 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903483 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902727 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903980 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902810 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902839 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.902869 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903077 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903135 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903168 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903193 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903219 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903240 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903264 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.903288 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.912103 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.912294 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.912353 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.912488 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.912841 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.912857 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.912972 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 11:09:06 crc 
kubenswrapper[4797]: I0216 11:09:06.913032 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913142 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913348 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913467 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913497 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913472 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913593 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913627 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.913909 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.914139 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hdft4"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.915093 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.915177 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.915292 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.915299 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.915367 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.915427 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.915488 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.916052 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.916786 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.917410 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.917698 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.917494 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.917856 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.917740 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.917776 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.921963 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.935912 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.937075 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qtxzc"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.937773 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.938184 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.939536 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.954532 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.955813 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.958633 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.958661 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.959330 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.959201 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.959200 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.959230 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.959280 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.960227 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.961722 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.962065 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.962156 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.962896 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.965531 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.966236 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.966652 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.966951 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.967272 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.967634 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.967788 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.967808 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.967868 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.971147 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ckmh7"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.971352 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.972136 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.972450 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.973970 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.974017 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.974626 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.982451 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hmhhf"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.982883 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.983253 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.983648 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.983732 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.984397 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wjqbl"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.984642 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.984859 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.990287 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.991280 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-28zck"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.992027 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.992323 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.992494 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.992803 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.995796 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.996014 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.998017 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.998415 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.998735 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9"] Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.999060 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:06 crc kubenswrapper[4797]: I0216 11:09:06.999672 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.000230 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.006365 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rgnb"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.007749 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qkrj2"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.008967 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.009434 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.012157 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.014956 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.016711 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7df1020-cfec-446c-8cee-66f3ed9a7f79-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.016799 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-serving-cert\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.016860 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmh5v\" (UniqueName: \"kubernetes.io/projected/3699cc64-5615-4ce7-890a-d8fbed713b4c-kube-api-access-qmh5v\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.016910 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.016936 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-serving-cert\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.016959 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-config\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017000 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-etcd-client\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017027 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4j22\" (UniqueName: \"kubernetes.io/projected/f7df1020-cfec-446c-8cee-66f3ed9a7f79-kube-api-access-m4j22\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017057 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq6b9\" (UniqueName: \"kubernetes.io/projected/7d51c375-5f0e-49cd-86ff-f26eda853733-kube-api-access-hq6b9\") pod \"dns-operator-744455d44c-hdft4\" (UID: \"7d51c375-5f0e-49cd-86ff-f26eda853733\") " pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017081 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-service-ca-bundle\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017116 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-config\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017185 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-policies\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017211 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-dir\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017247 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017276 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/931e6a97-a601-42c3-8b62-ef08752cf75c-images\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017304 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b1fe697-5783-4a83-b502-d9f25912c37c-config\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017351 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017374 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-service-ca\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017398 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrbb\" (UniqueName: \"kubernetes.io/projected/d2f2e6ac-38ac-41dd-b195-7fe50447270e-kube-api-access-9nrbb\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017427 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-audit-policies\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017452 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-image-import-ca\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017477 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3699cc64-5615-4ce7-890a-d8fbed713b4c-audit-dir\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.017502 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018507 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018552 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-etcd-client\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018569 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b1fe697-5783-4a83-b502-d9f25912c37c-auth-proxy-config\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018614 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/931e6a97-a601-42c3-8b62-ef08752cf75c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018646 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b1fe697-5783-4a83-b502-d9f25912c37c-machine-approver-tls\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018673 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-etcd-serving-ca\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018701 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 
11:09:07.018793 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-client-ca\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018832 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-serving-cert\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018864 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-config\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018880 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7tjl\" (UniqueName: \"kubernetes.io/projected/931e6a97-a601-42c3-8b62-ef08752cf75c-kube-api-access-t7tjl\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018899 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-config\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018919 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fqjv\" (UniqueName: \"kubernetes.io/projected/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-kube-api-access-4fqjv\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018935 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpzg8\" (UniqueName: \"kubernetes.io/projected/ad05eae6-52a0-4044-a080-06cb3ebc5a04-kube-api-access-mpzg8\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018955 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97x74\" (UniqueName: \"kubernetes.io/projected/61891ace-57b4-446d-afb5-cec9848da89a-kube-api-access-97x74\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.018970 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79c217-436f-4765-897e-95e388aed4b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019000 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwm5z\" (UniqueName: \"kubernetes.io/projected/3b1fe697-5783-4a83-b502-d9f25912c37c-kube-api-access-vwm5z\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019022 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0bf776c0-392b-4a88-86df-a31fc1538e5f-audit-dir\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019042 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-console-config\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019056 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmftq\" (UniqueName: \"kubernetes.io/projected/ba79c217-436f-4765-897e-95e388aed4b4-kube-api-access-qmftq\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019091 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019113 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019140 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr44v\" (UniqueName: \"kubernetes.io/projected/8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96-kube-api-access-vr44v\") pod \"downloads-7954f5f757-dxttg\" (UID: \"8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96\") " pod="openshift-console/downloads-7954f5f757-dxttg" Feb 16 11:09:07 crc kubenswrapper[4797]: 
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019174 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019193 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7df1020-cfec-446c-8cee-66f3ed9a7f79-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019207 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f2e6ac-38ac-41dd-b195-7fe50447270e-serving-cert\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019231 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019247 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-client-ca\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019260 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8483dc-9868-4194-9feb-488816a99fbe-serving-cert\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019297 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-audit\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019315 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-encryption-config\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019332 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b58dea-daa7-4b11-b6b9-c5a9471f1129-serving-cert\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019346 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79c217-436f-4765-897e-95e388aed4b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019351 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019362 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51c375-5f0e-49cd-86ff-f26eda853733-metrics-tls\") pod \"dns-operator-744455d44c-hdft4\" (UID: \"7d51c375-5f0e-49cd-86ff-f26eda853733\") " pod="openshift-dns-operator/dns-operator-744455d44c-hdft4"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019440 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-trusted-ca-bundle\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019454 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/931e6a97-a601-42c3-8b62-ef08752cf75c-config\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019468 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019482 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019504 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvn2k\" (UniqueName: \"kubernetes.io/projected/1d8483dc-9868-4194-9feb-488816a99fbe-kube-api-access-fvn2k\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019518 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/0bf776c0-392b-4a88-86df-a31fc1538e5f-kube-api-access-hc7vc\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019534 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-serving-cert\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019549 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019565 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019592 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-oauth-config\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019606 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-encryption-config\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019619 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019631 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019664 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019686 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-oauth-serving-cert\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019704 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skvsl\" (UniqueName: \"kubernetes.io/projected/45b58dea-daa7-4b11-b6b9-c5a9471f1129-kube-api-access-skvsl\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019711 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xrcnq"]
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019725 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3699cc64-5615-4ce7-890a-d8fbed713b4c-node-pullsecrets\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019741 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-config\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.019758 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-trusted-ca\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.020313 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xrcnq"
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.021253 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-4dv7z"]
Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.021648 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.023524 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnvtz"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.024065 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.024216 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.026676 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.026995 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mxqz2"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.028260 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.030650 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4d5np"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.031220 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.032724 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tg9bq"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.034821 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nj877"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.035945 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hdft4"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.038021 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.038310 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.039775 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pvwfm"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.040817 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.041791 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.042943 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.044108 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 11:09:07 crc 
kubenswrapper[4797]: I0216 11:09:07.044107 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.047388 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wjqbl"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.049468 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qtxzc"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.051023 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-28zck"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.052881 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.055753 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.057793 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.060110 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.063643 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.064118 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.066639 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.067275 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ckmh7"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.068848 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8xchc"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.070115 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.073080 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.075391 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.076810 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.078437 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nqzsz"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.080969 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dxttg"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.084197 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.084401 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kjc2z"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.085838 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.086004 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rgnb"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.087865 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8xchc"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.089914 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.099634 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xrcnq"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.100663 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.101620 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qkrj2"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.102846 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kjc2z"] Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.103758 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120232 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-audit-policies\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120262 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-image-import-ca\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120280 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3699cc64-5615-4ce7-890a-d8fbed713b4c-audit-dir\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120297 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120317 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71883ef1-52ce-4531-8997-33fd0589cccf-srv-cert\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120333 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b1fe697-5783-4a83-b502-d9f25912c37c-auth-proxy-config\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120352 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120367 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-etcd-client\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120383 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/931e6a97-a601-42c3-8b62-ef08752cf75c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120398 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b1fe697-5783-4a83-b502-d9f25912c37c-machine-approver-tls\") pod \"machine-approver-56656f9798-wnkfx\" (UID: 
\"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120398 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3699cc64-5615-4ce7-890a-d8fbed713b4c-audit-dir\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120421 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-etcd-serving-ca\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120465 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef2f4d2-8723-4555-a4a4-eda869af0507-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120513 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-client-ca\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120543 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-serving-cert\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120571 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-config\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120618 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7tjl\" (UniqueName: \"kubernetes.io/projected/931e6a97-a601-42c3-8b62-ef08752cf75c-kube-api-access-t7tjl\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120642 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-config\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120669 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71883ef1-52ce-4531-8997-33fd0589cccf-profile-collector-cert\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120695 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fqjv\" (UniqueName: \"kubernetes.io/projected/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-kube-api-access-4fqjv\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120718 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97x74\" (UniqueName: \"kubernetes.io/projected/61891ace-57b4-446d-afb5-cec9848da89a-kube-api-access-97x74\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120743 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpzg8\" (UniqueName: \"kubernetes.io/projected/ad05eae6-52a0-4044-a080-06cb3ebc5a04-kube-api-access-mpzg8\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120772 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79c217-436f-4765-897e-95e388aed4b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120801 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwm5z\" (UniqueName: \"kubernetes.io/projected/3b1fe697-5783-4a83-b502-d9f25912c37c-kube-api-access-vwm5z\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120829 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb92edf8-d734-4849-9dc1-26e8a68ce802-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120857 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0bf776c0-392b-4a88-86df-a31fc1538e5f-audit-dir\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120865 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-etcd-serving-ca\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120880 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-console-config\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120919 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmftq\" (UniqueName: \"kubernetes.io/projected/ba79c217-436f-4765-897e-95e388aed4b4-kube-api-access-qmftq\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120961 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.120986 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.121011 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr44v\" (UniqueName: \"kubernetes.io/projected/8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96-kube-api-access-vr44v\") pod \"downloads-7954f5f757-dxttg\" (UID: \"8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96\") " pod="openshift-console/downloads-7954f5f757-dxttg" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.121080 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.121149 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.121180 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7df1020-cfec-446c-8cee-66f3ed9a7f79-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.121219 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f2e6ac-38ac-41dd-b195-7fe50447270e-serving-cert\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.121435 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-image-import-ca\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.121970 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122009 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-client-ca\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122031 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8483dc-9868-4194-9feb-488816a99fbe-serving-cert\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122054 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-audit\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122078 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-encryption-config\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122177 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b58dea-daa7-4b11-b6b9-c5a9471f1129-serving-cert\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122205 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ba79c217-436f-4765-897e-95e388aed4b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122228 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51c375-5f0e-49cd-86ff-f26eda853733-metrics-tls\") pod \"dns-operator-744455d44c-hdft4\" (UID: \"7d51c375-5f0e-49cd-86ff-f26eda853733\") " pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122255 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-trusted-ca-bundle\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122277 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/931e6a97-a601-42c3-8b62-ef08752cf75c-config\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122300 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122321 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122345 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvn2k\" (UniqueName: \"kubernetes.io/projected/1d8483dc-9868-4194-9feb-488816a99fbe-kube-api-access-fvn2k\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122367 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/0bf776c0-392b-4a88-86df-a31fc1538e5f-kube-api-access-hc7vc\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122391 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb92edf8-d734-4849-9dc1-26e8a68ce802-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122424 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-serving-cert\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122447 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122467 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122488 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znkxn\" (UniqueName: \"kubernetes.io/projected/71883ef1-52ce-4531-8997-33fd0589cccf-kube-api-access-znkxn\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122512 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-oauth-config\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122531 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-encryption-config\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122553 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122590 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef2f4d2-8723-4555-a4a4-eda869af0507-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122624 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122647 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-oauth-serving-cert\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122670 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skvsl\" (UniqueName: \"kubernetes.io/projected/45b58dea-daa7-4b11-b6b9-c5a9471f1129-kube-api-access-skvsl\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122692 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3699cc64-5615-4ce7-890a-d8fbed713b4c-node-pullsecrets\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122713 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-config\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122737 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-trusted-ca\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122766 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7df1020-cfec-446c-8cee-66f3ed9a7f79-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122789 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-serving-cert\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122813 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmh5v\" (UniqueName: 
\"kubernetes.io/projected/3699cc64-5615-4ce7-890a-d8fbed713b4c-kube-api-access-qmh5v\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122836 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122860 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2kw\" (UniqueName: \"kubernetes.io/projected/e60d9bf0-73bb-4eb5-ab0e-cce684085087-kube-api-access-2c2kw\") pod \"migrator-59844c95c7-4v5ch\" (UID: \"e60d9bf0-73bb-4eb5-ab0e-cce684085087\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122885 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-serving-cert\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122920 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-config\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122944 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-etcd-client\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122964 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4j22\" (UniqueName: \"kubernetes.io/projected/f7df1020-cfec-446c-8cee-66f3ed9a7f79-kube-api-access-m4j22\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.122987 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59c2l\" (UniqueName: \"kubernetes.io/projected/eb92edf8-d734-4849-9dc1-26e8a68ce802-kube-api-access-59c2l\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123010 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hq6b9\" (UniqueName: \"kubernetes.io/projected/7d51c375-5f0e-49cd-86ff-f26eda853733-kube-api-access-hq6b9\") pod 
\"dns-operator-744455d44c-hdft4\" (UID: \"7d51c375-5f0e-49cd-86ff-f26eda853733\") " pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123030 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-service-ca-bundle\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123059 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/931e6a97-a601-42c3-8b62-ef08752cf75c-images\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123078 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-config\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123097 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-policies\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123119 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-dir\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123146 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123172 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b1fe697-5783-4a83-b502-d9f25912c37c-config\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123194 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc 
kubenswrapper[4797]: I0216 11:09:07.123227 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-service-ca\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123252 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrbb\" (UniqueName: \"kubernetes.io/projected/d2f2e6ac-38ac-41dd-b195-7fe50447270e-kube-api-access-9nrbb\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123279 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef2f4d2-8723-4555-a4a4-eda869af0507-config\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123312 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb92edf8-d734-4849-9dc1-26e8a68ce802-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.123523 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0bf776c0-392b-4a88-86df-a31fc1538e5f-audit-dir\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.124095 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-audit-policies\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.124133 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.124668 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-console-config\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.124691 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.124893 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-dir\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.125167 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-client-ca\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.125187 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-config\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.125210 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.125318 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-config\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.125416 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3699cc64-5615-4ce7-890a-d8fbed713b4c-node-pullsecrets\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.126070 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf776c0-392b-4a88-86df-a31fc1538e5f-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.126914 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-service-ca\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.127064 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-config\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.127403 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.127966 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba79c217-436f-4765-897e-95e388aed4b4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.128081 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3699cc64-5615-4ce7-890a-d8fbed713b4c-audit\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.128105 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-serving-cert\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.128255 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-oauth-serving-cert\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.128940 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-trusted-ca\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.129300 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-trusted-ca-bundle\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.130321 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b58dea-daa7-4b11-b6b9-c5a9471f1129-serving-cert\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.130328 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b1fe697-5783-4a83-b502-d9f25912c37c-config\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc 
kubenswrapper[4797]: I0216 11:09:07.130709 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b1fe697-5783-4a83-b502-d9f25912c37c-auth-proxy-config\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.130838 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d51c375-5f0e-49cd-86ff-f26eda853733-metrics-tls\") pod \"dns-operator-744455d44c-hdft4\" (UID: \"7d51c375-5f0e-49cd-86ff-f26eda853733\") " pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.130938 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-config\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.130959 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-client-ca\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.131138 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7df1020-cfec-446c-8cee-66f3ed9a7f79-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.132394 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b1fe697-5783-4a83-b502-d9f25912c37c-machine-approver-tls\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.132561 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-serving-cert\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.133159 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/931e6a97-a601-42c3-8b62-ef08752cf75c-images\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.133208 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/931e6a97-a601-42c3-8b62-ef08752cf75c-config\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.133324 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.133775 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.133804 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-policies\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.134038 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.134475 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.134695 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba79c217-436f-4765-897e-95e388aed4b4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.135094 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-config\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.135210 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.135305 4797 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d8483dc-9868-4194-9feb-488816a99fbe-service-ca-bundle\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.135921 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f2e6ac-38ac-41dd-b195-7fe50447270e-serving-cert\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.136058 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.137183 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-oauth-config\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.138233 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-serving-cert\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.139103 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-encryption-config\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.139636 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.139874 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-encryption-config\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.139938 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-nj877\" (UID: 
\"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.140385 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.141130 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-etcd-client\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.141341 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.141938 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.142081 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d8483dc-9868-4194-9feb-488816a99fbe-serving-cert\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.142795 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3699cc64-5615-4ce7-890a-d8fbed713b4c-serving-cert\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.143786 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7df1020-cfec-446c-8cee-66f3ed9a7f79-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.144046 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/931e6a97-a601-42c3-8b62-ef08752cf75c-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.144278 4797 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.144860 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0bf776c0-392b-4a88-86df-a31fc1538e5f-etcd-client\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.147121 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.167716 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.184273 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.204672 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.223805 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224146 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znkxn\" (UniqueName: \"kubernetes.io/projected/71883ef1-52ce-4531-8997-33fd0589cccf-kube-api-access-znkxn\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224182 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef2f4d2-8723-4555-a4a4-eda869af0507-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224227 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2kw\" (UniqueName: \"kubernetes.io/projected/e60d9bf0-73bb-4eb5-ab0e-cce684085087-kube-api-access-2c2kw\") pod \"migrator-59844c95c7-4v5ch\" (UID: \"e60d9bf0-73bb-4eb5-ab0e-cce684085087\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224267 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59c2l\" (UniqueName: \"kubernetes.io/projected/eb92edf8-d734-4849-9dc1-26e8a68ce802-kube-api-access-59c2l\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224319 4797 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef2f4d2-8723-4555-a4a4-eda869af0507-config\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224344 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb92edf8-d734-4849-9dc1-26e8a68ce802-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224369 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71883ef1-52ce-4531-8997-33fd0589cccf-srv-cert\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224394 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef2f4d2-8723-4555-a4a4-eda869af0507-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224450 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71883ef1-52ce-4531-8997-33fd0589cccf-profile-collector-cert\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224493 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb92edf8-d734-4849-9dc1-26e8a68ce802-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.224565 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb92edf8-d734-4849-9dc1-26e8a68ce802-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.225172 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ef2f4d2-8723-4555-a4a4-eda869af0507-config\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.225529 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/eb92edf8-d734-4849-9dc1-26e8a68ce802-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.228107 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ef2f4d2-8723-4555-a4a4-eda869af0507-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.229065 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/eb92edf8-d734-4849-9dc1-26e8a68ce802-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.271844 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.284022 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.303290 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.323300 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.342995 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.363725 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.384280 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.404543 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.424545 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.444101 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.465122 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.485528 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 11:09:07 crc 
kubenswrapper[4797]: I0216 11:09:07.505410 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.524746 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.544141 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.564667 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.583786 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.605539 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.625521 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.645095 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.666031 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.685373 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.705037 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.725317 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.744840 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.765136 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.784884 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.804543 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.824415 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.844882 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.865358 4797 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.884041 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.905337 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.924329 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.944521 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.964076 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 11:09:07 crc kubenswrapper[4797]: I0216 11:09:07.984294 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.002317 4797 request.go:700] Waited for 1.00949824s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.004890 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.024205 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.044527 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.064881 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.083908 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.104521 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.124872 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.138717 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/71883ef1-52ce-4531-8997-33fd0589cccf-srv-cert\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.144376 4797 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.164106 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.169005 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/71883ef1-52ce-4531-8997-33fd0589cccf-profile-collector-cert\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.185302 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.204609 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.223835 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.244418 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.263974 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.285624 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.304244 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.323923 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.344341 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.364832 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.384491 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.404932 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.424161 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.444051 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.463815 4797 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.484617 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.505032 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.524604 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.554907 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.564203 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.584832 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.624754 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.645073 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.664108 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.685569 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.704012 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.724482 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.749662 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.764004 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.784952 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.804038 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.824394 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.843779 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.864269 4797 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.884210 4797 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.904029 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.955686 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpzg8\" (UniqueName: \"kubernetes.io/projected/ad05eae6-52a0-4044-a080-06cb3ebc5a04-kube-api-access-mpzg8\") pod \"oauth-openshift-558db77b4-nj877\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.967283 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97x74\" (UniqueName: \"kubernetes.io/projected/61891ace-57b4-446d-afb5-cec9848da89a-kube-api-access-97x74\") pod \"console-f9d7485db-4d5np\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:08 crc kubenswrapper[4797]: I0216 11:09:08.981462 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7tjl\" (UniqueName: \"kubernetes.io/projected/931e6a97-a601-42c3-8b62-ef08752cf75c-kube-api-access-t7tjl\") pod \"machine-api-operator-5694c8668f-pvwfm\" (UID: \"931e6a97-a601-42c3-8b62-ef08752cf75c\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.002564 4797 request.go:700] Waited for 1.879463566s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.004629 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fqjv\" (UniqueName: \"kubernetes.io/projected/d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab-kube-api-access-4fqjv\") pod \"console-operator-58897d9998-nqzsz\" (UID: \"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab\") " pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.015992 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.033226 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwm5z\" (UniqueName: \"kubernetes.io/projected/3b1fe697-5783-4a83-b502-d9f25912c37c-kube-api-access-vwm5z\") pod \"machine-approver-56656f9798-wnkfx\" (UID: \"3b1fe697-5783-4a83-b502-d9f25912c37c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.047390 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr44v\" (UniqueName: \"kubernetes.io/projected/8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96-kube-api-access-vr44v\") pod \"downloads-7954f5f757-dxttg\" (UID: \"8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96\") " pod="openshift-console/downloads-7954f5f757-dxttg" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.067121 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmh5v\" (UniqueName: \"kubernetes.io/projected/3699cc64-5615-4ce7-890a-d8fbed713b4c-kube-api-access-qmh5v\") pod \"apiserver-76f77b778f-mxqz2\" (UID: \"3699cc64-5615-4ce7-890a-d8fbed713b4c\") " pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.084162 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc7vc\" (UniqueName: \"kubernetes.io/projected/0bf776c0-392b-4a88-86df-a31fc1538e5f-kube-api-access-hc7vc\") pod \"apiserver-7bbb656c7d-gmmm4\" (UID: \"0bf776c0-392b-4a88-86df-a31fc1538e5f\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.104493 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4j22\" (UniqueName: \"kubernetes.io/projected/f7df1020-cfec-446c-8cee-66f3ed9a7f79-kube-api-access-m4j22\") pod \"openshift-apiserver-operator-796bbdcf4f-8wzvf\" (UID: \"f7df1020-cfec-446c-8cee-66f3ed9a7f79\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.114879 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.118156 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq6b9\" (UniqueName: \"kubernetes.io/projected/7d51c375-5f0e-49cd-86ff-f26eda853733-kube-api-access-hq6b9\") pod \"dns-operator-744455d44c-hdft4\" (UID: \"7d51c375-5f0e-49cd-86ff-f26eda853733\") " pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.127906 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:09 crc kubenswrapper[4797]: W0216 11:09:09.142510 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b1fe697_5783_4a83_b502_d9f25912c37c.slice/crio-f0b8852acafb4d2d14874bf6a5b8c2b989bbacf6802e4d3d115ec21f3ee3ae05 WatchSource:0}: Error finding container f0b8852acafb4d2d14874bf6a5b8c2b989bbacf6802e4d3d115ec21f3ee3ae05: Status 404 returned error can't find the container with id f0b8852acafb4d2d14874bf6a5b8c2b989bbacf6802e4d3d115ec21f3ee3ae05 Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.144890 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-dxttg" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.150203 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.161876 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrbb\" (UniqueName: \"kubernetes.io/projected/d2f2e6ac-38ac-41dd-b195-7fe50447270e-kube-api-access-9nrbb\") pod \"controller-manager-879f6c89f-hnvtz\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.174313 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.176103 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.178298 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmftq\" (UniqueName: \"kubernetes.io/projected/ba79c217-436f-4765-897e-95e388aed4b4-kube-api-access-qmftq\") pod \"openshift-controller-manager-operator-756b6f6bc6-p4ktc\" (UID: \"ba79c217-436f-4765-897e-95e388aed4b4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.190744 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skvsl\" (UniqueName: \"kubernetes.io/projected/45b58dea-daa7-4b11-b6b9-c5a9471f1129-kube-api-access-skvsl\") pod \"route-controller-manager-6576b87f9c-dxrpc\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.227742 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvn2k\" (UniqueName: \"kubernetes.io/projected/1d8483dc-9868-4194-9feb-488816a99fbe-kube-api-access-fvn2k\") pod \"authentication-operator-69f744f599-tg9bq\" (UID: \"1d8483dc-9868-4194-9feb-488816a99fbe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.229241 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znkxn\" (UniqueName: \"kubernetes.io/projected/71883ef1-52ce-4531-8997-33fd0589cccf-kube-api-access-znkxn\") pod \"catalog-operator-68c6474976-72hd2\" (UID: \"71883ef1-52ce-4531-8997-33fd0589cccf\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.245495 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ef2f4d2-8723-4555-a4a4-eda869af0507-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-kkhxz\" (UID: \"3ef2f4d2-8723-4555-a4a4-eda869af0507\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.267452 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59c2l\" (UniqueName: \"kubernetes.io/projected/eb92edf8-d734-4849-9dc1-26e8a68ce802-kube-api-access-59c2l\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.277529 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2kw\" (UniqueName: \"kubernetes.io/projected/e60d9bf0-73bb-4eb5-ab0e-cce684085087-kube-api-access-2c2kw\") pod \"migrator-59844c95c7-4v5ch\" (UID: \"e60d9bf0-73bb-4eb5-ab0e-cce684085087\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.300125 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.313203 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eb92edf8-d734-4849-9dc1-26e8a68ce802-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-5hhbn\" (UID: \"eb92edf8-d734-4849-9dc1-26e8a68ce802\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.316411 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.335132 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.356778 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.358761 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdcv5\" (UniqueName: \"kubernetes.io/projected/a4a59a1f-9299-46dc-b904-3ec59cd68194-kube-api-access-fdcv5\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.358788 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d91d12-f724-453e-b5af-c0cb44777ef4-config\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.358811 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3e6d740e-c662-41a2-a815-0143fe9e7785-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.358883 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-metrics-certs\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.358922 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0b68c52a-173f-4415-9941-1f433247ee6f-signing-key\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.358944 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/38cf7724-9e22-4b65-9362-4e712828808d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4w9q9\" (UID: \"38cf7724-9e22-4b65-9362-4e712828808d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.358966 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-stats-auth\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359014 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd8g5\" (UniqueName: \"kubernetes.io/projected/c687cb5b-f367-4bba-b59a-bbe77beee146-kube-api-access-jd8g5\") pod \"router-default-5444994796-hmhhf\" (UID: 
\"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359037 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a4a59a1f-9299-46dc-b904-3ec59cd68194-images\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359101 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95fjr\" (UniqueName: \"kubernetes.io/projected/98432c03-3d6e-436b-a2de-5467c1e5f33b-kube-api-access-95fjr\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359127 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-config\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359175 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98432c03-3d6e-436b-a2de-5467c1e5f33b-webhook-cert\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359196 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8cm\" (UniqueName: \"kubernetes.io/projected/47d91d12-f724-453e-b5af-c0cb44777ef4-kube-api-access-ft8cm\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359250 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-proxy-tls\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359285 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d61b69d-a67d-4c60-9691-ccc3b8f24608-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359306 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ccc8e92-b072-4c98-ba60-8cfbaeef1776-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-6zm46\" (UID: \"8ccc8e92-b072-4c98-ba60-8cfbaeef1776\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359328 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14d96431-59d9-4550-a933-e94472bd3295-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359351 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-trusted-ca\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359372 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840d6bcf-e97f-4804-9ed8-164475f990eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359442 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-certificates\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359463 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840d6bcf-e97f-4804-9ed8-164475f990eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359515 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-bound-sa-token\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359538 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e6d740e-c662-41a2-a815-0143fe9e7785-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359570 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4m5\" (UniqueName: 
\"kubernetes.io/projected/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-kube-api-access-kx4m5\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359613 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359635 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359693 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359718 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-tls\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359739 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d61b69d-a67d-4c60-9691-ccc3b8f24608-srv-cert\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359763 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e80deaa4-4f1c-4a94-9bac-cd4244a7d369-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rgs6z\" (UID: \"e80deaa4-4f1c-4a94-9bac-cd4244a7d369\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359786 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-config\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359820 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d483c84-5b4f-4e05-aca6-526ff414a70c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wjqbl\" (UID: \"7d483c84-5b4f-4e05-aca6-526ff414a70c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359841 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359862 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0b68c52a-173f-4415-9941-1f433247ee6f-signing-cabundle\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359899 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840d6bcf-e97f-4804-9ed8-164475f990eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359920 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a59a1f-9299-46dc-b904-3ec59cd68194-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359964 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-client\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.359985 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360007 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6hjr\" (UniqueName: \"kubernetes.io/projected/8ccc8e92-b072-4c98-ba60-8cfbaeef1776-kube-api-access-b6hjr\") pod \"cluster-samples-operator-665b6dd947-6zm46\" (UID: \"8ccc8e92-b072-4c98-ba60-8cfbaeef1776\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360040 4797 
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360062 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360132 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d91d12-f724-453e-b5af-c0cb44777ef4-serving-cert\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360181 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14d96431-59d9-4550-a933-e94472bd3295-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360203 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-service-ca\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360306 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5pzg\" (UniqueName: \"kubernetes.io/projected/3d61b69d-a67d-4c60-9691-ccc3b8f24608-kube-api-access-k5pzg\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360329 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-metrics-tls\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360377 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf2n6\" (UniqueName: \"kubernetes.io/projected/7d483c84-5b4f-4e05-aca6-526ff414a70c-kube-api-access-sf2n6\") pod \"multus-admission-controller-857f4d67dd-wjqbl\" (UID: \"7d483c84-5b4f-4e05-aca6-526ff414a70c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360398 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gvs\" (UniqueName: \"kubernetes.io/projected/cb64bae9-2b5d-4ad4-b184-36f36908713a-kube-api-access-d6gvs\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360453 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360479 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxn59\" (UniqueName: \"kubernetes.io/projected/e80deaa4-4f1c-4a94-9bac-cd4244a7d369-kube-api-access-vxn59\") pod \"control-plane-machine-set-operator-78cbb6b69f-rgs6z\" (UID: \"e80deaa4-4f1c-4a94-9bac-cd4244a7d369\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360502 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqqvt\" (UniqueName: \"kubernetes.io/projected/38cf7724-9e22-4b65-9362-4e712828808d-kube-api-access-nqqvt\") pod \"package-server-manager-789f6589d5-4w9q9\" (UID: \"38cf7724-9e22-4b65-9362-4e712828808d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360529 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7kwz\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-kube-api-access-c7kwz\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360551 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-default-certificate\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360613 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/98432c03-3d6e-436b-a2de-5467c1e5f33b-tmpfs\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360634 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jknzp\" (UniqueName: \"kubernetes.io/projected/3e6d740e-c662-41a2-a815-0143fe9e7785-kube-api-access-jknzp\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360701 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-serving-cert\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc"
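[editor's note] The interleaved "VerifyControllerAttachedVolume started" (reconciler_common.go:245) and "MountVolume started" (reconciler_common.go:218) entries reflect the volume manager's reconciler comparing a desired-state view (which pods need which volumes) with an actual-state view (what is attached/mounted) and advancing each volume one stage per pass. A hedged sketch of that two-phase pattern with invented types, not the kubernetes source:

package main

import "fmt"

type volume struct{ name, pod string }

type states struct {
	desired  []volume
	attached map[string]bool // volume name -> attach verified
	mounted  map[string]bool // volume name -> mounted into the pod
}

// reconcile advances each desired volume by one stage: verify the
// controller attached it first, then mount it.
func (s *states) reconcile() {
	for _, v := range s.desired {
		switch {
		case !s.attached[v.name]:
			fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod %q\n", v.name, v.pod)
			s.attached[v.name] = true // assume verification succeeds
		case !s.mounted[v.name]:
			fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", v.name, v.pod)
			s.mounted[v.name] = true // corresponds to "MountVolume.SetUp succeeded"
		}
	}
}

func main() {
	s := &states{
		desired:  []volume{{"serving-cert", "etcd-operator-b45778765-qtxzc"}},
		attached: map[string]bool{},
		mounted:  map[string]bool{},
	}
	s.reconcile() // first pass: verify attach
	s.reconcile() // second pass: mount
}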
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360723 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98432c03-3d6e-436b-a2de-5467c1e5f33b-apiservice-cert\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360774 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvt26\" (UniqueName: \"kubernetes.io/projected/14d96431-59d9-4550-a933-e94472bd3295-kube-api-access-lvt26\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360807 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-ca\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360830 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv6zg\" (UniqueName: \"kubernetes.io/projected/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-kube-api-access-pv6zg\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360853 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-trusted-ca\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360873 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c687cb5b-f367-4bba-b59a-bbe77beee146-service-ca-bundle\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360911 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360953 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a59a1f-9299-46dc-b904-3ec59cd68194-proxy-tls\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.360999 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ldv5\" (UniqueName: \"kubernetes.io/projected/0b68c52a-173f-4415-9941-1f433247ee6f-kube-api-access-2ldv5\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.361020 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr"
Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.366340 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:09.866317036 +0000 UTC m=+144.586502016 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.374111 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.380781 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-pvwfm"]
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.398041 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.400180 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.436121 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.460558 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq"
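[editor's note] The E-level nestedpendingoperations.go:348 entry above shows kubelet's per-volume retry gating: after a failed operation, retries are blocked until a backoff window elapses ("durationBeforeRetry 500ms", "No retries permitted until ..."). A minimal sketch of that gating, assuming the commonly cited kubelet constants (500ms initial delay, doubling per failure, capped); invented types, not the kubernetes source:

package main

import (
	"fmt"
	"time"
)

type backoff struct {
	lastError time.Time
	duration  time.Duration
}

// errorOccurred records a failure and grows the wait window, starting at
// 500ms (matching "durationBeforeRetry 500ms" in the log) and doubling,
// with an assumed cap of roughly two minutes.
func (b *backoff) errorOccurred(now time.Time) {
	if b.duration == 0 {
		b.duration = 500 * time.Millisecond
	} else {
		b.duration *= 2
		if limit := 2*time.Minute + 2*time.Second; b.duration > limit {
			b.duration = limit
		}
	}
	b.lastError = now
}

// retryAllowed mirrors "No retries permitted until <lastError+duration>".
func (b *backoff) retryAllowed(now time.Time) bool {
	return now.After(b.lastError.Add(b.duration))
}

func main() {
	var b backoff
	now := time.Now()
	b.errorOccurred(now)
	fmt.Println("no retries permitted until", b.lastError.Add(b.duration))
	fmt.Println("allowed immediately?", b.retryAllowed(now))                          // false
	fmt.Println("allowed after 600ms?", b.retryAllowed(now.Add(600*time.Millisecond))) // true
}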
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.461380 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.461549 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d91d12-f724-453e-b5af-c0cb44777ef4-serving-cert\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.461595 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14d96431-59d9-4550-a933-e94472bd3295-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.461621 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-service-ca\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462199 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4602521d-4d8a-4753-b873-13a315c7ae18-config-volume\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462238 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5pzg\" (UniqueName: \"kubernetes.io/projected/3d61b69d-a67d-4c60-9691-ccc3b8f24608-kube-api-access-k5pzg\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462258 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-metrics-tls\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462285 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f673c7b-0916-4829-9630-1f927c932254-config-volume\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462318 4797 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sf2n6\" (UniqueName: \"kubernetes.io/projected/7d483c84-5b4f-4e05-aca6-526ff414a70c-kube-api-access-sf2n6\") pod \"multus-admission-controller-857f4d67dd-wjqbl\" (UID: \"7d483c84-5b4f-4e05-aca6-526ff414a70c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462342 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gvs\" (UniqueName: \"kubernetes.io/projected/cb64bae9-2b5d-4ad4-b184-36f36908713a-kube-api-access-d6gvs\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462364 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn2pg\" (UniqueName: \"kubernetes.io/projected/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-kube-api-access-cn2pg\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.462398 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:09.962365531 +0000 UTC m=+144.682550511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462495 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462536 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxn59\" (UniqueName: \"kubernetes.io/projected/e80deaa4-4f1c-4a94-9bac-cd4244a7d369-kube-api-access-vxn59\") pod \"control-plane-machine-set-operator-78cbb6b69f-rgs6z\" (UID: \"e80deaa4-4f1c-4a94-9bac-cd4244a7d369\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462566 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqqvt\" (UniqueName: \"kubernetes.io/projected/38cf7724-9e22-4b65-9362-4e712828808d-kube-api-access-nqqvt\") pod \"package-server-manager-789f6589d5-4w9q9\" (UID: \"38cf7724-9e22-4b65-9362-4e712828808d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462679 4797 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c7kwz\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-kube-api-access-c7kwz\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462740 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-default-certificate\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462802 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/98432c03-3d6e-436b-a2de-5467c1e5f33b-tmpfs\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462829 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jknzp\" (UniqueName: \"kubernetes.io/projected/3e6d740e-c662-41a2-a815-0143fe9e7785-kube-api-access-jknzp\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462861 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-certs\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462907 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98432c03-3d6e-436b-a2de-5467c1e5f33b-apiservice-cert\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462938 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-serving-cert\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.462984 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvt26\" (UniqueName: \"kubernetes.io/projected/14d96431-59d9-4550-a933-e94472bd3295-kube-api-access-lvt26\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463020 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-ca\") pod 
\"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463053 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv6zg\" (UniqueName: \"kubernetes.io/projected/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-kube-api-access-pv6zg\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463084 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8v5b\" (UniqueName: \"kubernetes.io/projected/261bff34-cd36-4214-880f-231fa0f1679b-kube-api-access-l8v5b\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463141 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c687cb5b-f367-4bba-b59a-bbe77beee146-service-ca-bundle\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463191 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-trusted-ca\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463228 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463257 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-registration-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463267 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14d96431-59d9-4550-a933-e94472bd3295-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463297 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdrm\" (UniqueName: \"kubernetes.io/projected/4602521d-4d8a-4753-b873-13a315c7ae18-kube-api-access-hgdrm\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:09 
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463377 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-socket-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463432 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ldv5\" (UniqueName: \"kubernetes.io/projected/0b68c52a-173f-4415-9941-1f433247ee6f-kube-api-access-2ldv5\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463455 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463479 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ld54\" (UniqueName: \"kubernetes.io/projected/7f673c7b-0916-4829-9630-1f927c932254-kube-api-access-7ld54\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463521 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdcv5\" (UniqueName: \"kubernetes.io/projected/a4a59a1f-9299-46dc-b904-3ec59cd68194-kube-api-access-fdcv5\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463543 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d91d12-f724-453e-b5af-c0cb44777ef4-config\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463565 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3e6d740e-c662-41a2-a815-0143fe9e7785-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463623 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-plugins-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-plugins-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463666 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-metrics-certs\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463696 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8d063f9-5090-4321-85d8-739107bcd8da-cert\") pod \"ingress-canary-xrcnq\" (UID: \"a8d063f9-5090-4321-85d8-739107bcd8da\") " pod="openshift-ingress-canary/ingress-canary-xrcnq" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463724 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0b68c52a-173f-4415-9941-1f433247ee6f-signing-key\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463755 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-stats-auth\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463782 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/38cf7724-9e22-4b65-9362-4e712828808d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4w9q9\" (UID: \"38cf7724-9e22-4b65-9362-4e712828808d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463851 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd8g5\" (UniqueName: \"kubernetes.io/projected/c687cb5b-f367-4bba-b59a-bbe77beee146-kube-api-access-jd8g5\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463872 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a4a59a1f-9299-46dc-b904-3ec59cd68194-images\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463925 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95fjr\" (UniqueName: \"kubernetes.io/projected/98432c03-3d6e-436b-a2de-5467c1e5f33b-kube-api-access-95fjr\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463946 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-config\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463976 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/98432c03-3d6e-436b-a2de-5467c1e5f33b-webhook-cert\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.463997 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8cm\" (UniqueName: \"kubernetes.io/projected/47d91d12-f724-453e-b5af-c0cb44777ef4-kube-api-access-ft8cm\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464042 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-proxy-tls\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.464073 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:09.964053137 +0000 UTC m=+144.684238307 (durationBeforeRetry 500ms). 
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464109 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ccc8e92-b072-4c98-ba60-8cfbaeef1776-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6zm46\" (UID: \"8ccc8e92-b072-4c98-ba60-8cfbaeef1776\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464165 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d61b69d-a67d-4c60-9691-ccc3b8f24608-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464195 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-node-bootstrap-token\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464222 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14d96431-59d9-4550-a933-e94472bd3295-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464246 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-trusted-ca\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464274 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840d6bcf-e97f-4804-9ed8-164475f990eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464322 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-csi-data-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464354 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-certificates\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464376 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840d6bcf-e97f-4804-9ed8-164475f990eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464427 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-bound-sa-token\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464455 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e6d740e-c662-41a2-a815-0143fe9e7785-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464477 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4602521d-4d8a-4753-b873-13a315c7ae18-metrics-tls\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464503 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4m5\" (UniqueName: \"kubernetes.io/projected/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-kube-api-access-kx4m5\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464531 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464554 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464601 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5"
\"kubernetes.io/secret/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464628 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e80deaa4-4f1c-4a94-9bac-cd4244a7d369-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rgs6z\" (UID: \"e80deaa4-4f1c-4a94-9bac-cd4244a7d369\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464656 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-config\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464692 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-tls\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464713 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d61b69d-a67d-4c60-9691-ccc3b8f24608-srv-cert\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464735 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464759 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d483c84-5b4f-4e05-aca6-526ff414a70c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wjqbl\" (UID: \"7d483c84-5b4f-4e05-aca6-526ff414a70c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464787 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0b68c52a-173f-4415-9941-1f433247ee6f-signing-cabundle\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464813 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a59a1f-9299-46dc-b904-3ec59cd68194-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: 
\"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464835 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-mountpoint-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464873 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840d6bcf-e97f-4804-9ed8-164475f990eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464895 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.464917 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6hjr\" (UniqueName: \"kubernetes.io/projected/8ccc8e92-b072-4c98-ba60-8cfbaeef1776-kube-api-access-b6hjr\") pod \"cluster-samples-operator-665b6dd947-6zm46\" (UID: \"8ccc8e92-b072-4c98-ba60-8cfbaeef1776\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.470190 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47d91d12-f724-453e-b5af-c0cb44777ef4-serving-cert\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.470249 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-client\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.470334 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skpxl\" (UniqueName: \"kubernetes.io/projected/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-kube-api-access-skpxl\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.470394 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" 
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.470452 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f673c7b-0916-4829-9630-1f927c932254-secret-volume\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.470503 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbwc4\" (UniqueName: \"kubernetes.io/projected/a8d063f9-5090-4321-85d8-739107bcd8da-kube-api-access-gbwc4\") pod \"ingress-canary-xrcnq\" (UID: \"a8d063f9-5090-4321-85d8-739107bcd8da\") " pod="openshift-ingress-canary/ingress-canary-xrcnq" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.476306 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.478828 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0b68c52a-173f-4415-9941-1f433247ee6f-signing-key\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.480288 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.480559 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-config\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.480894 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47d91d12-f724-453e-b5af-c0cb44777ef4-config\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.480973 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a4a59a1f-9299-46dc-b904-3ec59cd68194-images\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.481486 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a4a59a1f-9299-46dc-b904-3ec59cd68194-auth-proxy-config\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.490173 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-service-ca\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.490385 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-metrics-tls\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.490822 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/98432c03-3d6e-436b-a2de-5467c1e5f33b-tmpfs\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.491373 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a59a1f-9299-46dc-b904-3ec59cd68194-proxy-tls\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.494390 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.495568 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3e6d740e-c662-41a2-a815-0143fe9e7785-available-featuregates\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.496111 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-proxy-tls\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.496840 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-ca\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.498993 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c687cb5b-f367-4bba-b59a-bbe77beee146-service-ca-bundle\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.499292 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.499717 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-trusted-ca\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.500701 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0b68c52a-173f-4415-9941-1f433247ee6f-signing-cabundle\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.502220 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/840d6bcf-e97f-4804-9ed8-164475f990eb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.511421 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-config\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.513371 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.514158 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.515222 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-certificates\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.517019 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-trusted-ca\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.518688 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/38cf7724-9e22-4b65-9362-4e712828808d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4w9q9\" (UID: \"38cf7724-9e22-4b65-9362-4e712828808d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.519621 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-serving-cert\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.519886 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-stats-auth\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.519970 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d483c84-5b4f-4e05-aca6-526ff414a70c-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wjqbl\" (UID: \"7d483c84-5b4f-4e05-aca6-526ff414a70c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.520201 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/98432c03-3d6e-436b-a2de-5467c1e5f33b-webhook-cert\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.520285 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14d96431-59d9-4550-a933-e94472bd3295-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.520422 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-metrics-certs\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.520557 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-etcd-client\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.520640 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3d61b69d-a67d-4c60-9691-ccc3b8f24608-srv-cert\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.521285 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c687cb5b-f367-4bba-b59a-bbe77beee146-default-certificate\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.522393 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.525408 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e80deaa4-4f1c-4a94-9bac-cd4244a7d369-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-rgs6z\" (UID: \"e80deaa4-4f1c-4a94-9bac-cd4244a7d369\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.525837 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/8ccc8e92-b072-4c98-ba60-8cfbaeef1776-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6zm46\" (UID: 
\"8ccc8e92-b072-4c98-ba60-8cfbaeef1776\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.526537 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3d61b69d-a67d-4c60-9691-ccc3b8f24608-profile-collector-cert\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.540052 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/98432c03-3d6e-436b-a2de-5467c1e5f33b-apiservice-cert\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.548069 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/840d6bcf-e97f-4804-9ed8-164475f990eb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.548986 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf2n6\" (UniqueName: \"kubernetes.io/projected/7d483c84-5b4f-4e05-aca6-526ff414a70c-kube-api-access-sf2n6\") pod \"multus-admission-controller-857f4d67dd-wjqbl\" (UID: \"7d483c84-5b4f-4e05-aca6-526ff414a70c\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.549796 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e6d740e-c662-41a2-a815-0143fe9e7785-serving-cert\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.550273 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-tls\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.552982 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.553708 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gvs\" (UniqueName: \"kubernetes.io/projected/cb64bae9-2b5d-4ad4-b184-36f36908713a-kube-api-access-d6gvs\") pod \"marketplace-operator-79b997595-5rgnb\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 
11:09:09.554498 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5pzg\" (UniqueName: \"kubernetes.io/projected/3d61b69d-a67d-4c60-9691-ccc3b8f24608-kube-api-access-k5pzg\") pod \"olm-operator-6b444d44fb-h68np\" (UID: \"3d61b69d-a67d-4c60-9691-ccc3b8f24608\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.563679 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skpxl\" (UniqueName: \"kubernetes.io/projected/228b4e9e-a51f-4fce-af91-4af93c9f3aa6-kube-api-access-skpxl\") pod \"etcd-operator-b45778765-qtxzc\" (UID: \"228b4e9e-a51f-4fce-af91-4af93c9f3aa6\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.571261 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.571882 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-mountpoint-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.571934 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f673c7b-0916-4829-9630-1f927c932254-secret-volume\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.571965 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbwc4\" (UniqueName: \"kubernetes.io/projected/a8d063f9-5090-4321-85d8-739107bcd8da-kube-api-access-gbwc4\") pod \"ingress-canary-xrcnq\" (UID: \"a8d063f9-5090-4321-85d8-739107bcd8da\") " pod="openshift-ingress-canary/ingress-canary-xrcnq" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572002 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4602521d-4d8a-4753-b873-13a315c7ae18-config-volume\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572028 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f673c7b-0916-4829-9630-1f927c932254-config-volume\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572053 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn2pg\" (UniqueName: \"kubernetes.io/projected/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-kube-api-access-cn2pg\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " 
pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572116 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-certs\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572169 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8v5b\" (UniqueName: \"kubernetes.io/projected/261bff34-cd36-4214-880f-231fa0f1679b-kube-api-access-l8v5b\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572198 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-registration-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572221 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgdrm\" (UniqueName: \"kubernetes.io/projected/4602521d-4d8a-4753-b873-13a315c7ae18-kube-api-access-hgdrm\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572256 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-socket-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572288 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ld54\" (UniqueName: \"kubernetes.io/projected/7f673c7b-0916-4829-9630-1f927c932254-kube-api-access-7ld54\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572319 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-plugins-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572340 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8d063f9-5090-4321-85d8-739107bcd8da-cert\") pod \"ingress-canary-xrcnq\" (UID: \"a8d063f9-5090-4321-85d8-739107bcd8da\") " pod="openshift-ingress-canary/ingress-canary-xrcnq" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572403 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-node-bootstrap-token\") pod \"machine-config-server-4dv7z\" (UID: 
\"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572427 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-csi-data-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.572461 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4602521d-4d8a-4753-b873-13a315c7ae18-metrics-tls\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.573833 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.073813204 +0000 UTC m=+144.793998184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.573885 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-mountpoint-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.574311 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4602521d-4d8a-4753-b873-13a315c7ae18-config-volume\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.574643 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-registration-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.574768 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-socket-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.575482 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4602521d-4d8a-4753-b873-13a315c7ae18-metrics-tls\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " 
pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.576982 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f673c7b-0916-4829-9630-1f927c932254-config-volume\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.578135 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f673c7b-0916-4829-9630-1f927c932254-secret-volume\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.579876 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-node-bootstrap-token\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.580066 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-csi-data-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.581832 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.582062 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/261bff34-cd36-4214-880f-231fa0f1679b-plugins-dir\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.582627 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a8d063f9-5090-4321-85d8-739107bcd8da-cert\") pod \"ingress-canary-xrcnq\" (UID: \"a8d063f9-5090-4321-85d8-739107bcd8da\") " pod="openshift-ingress-canary/ingress-canary-xrcnq" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.598935 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-certs\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.604422 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jknzp\" (UniqueName: \"kubernetes.io/projected/3e6d740e-c662-41a2-a815-0143fe9e7785-kube-api-access-jknzp\") pod \"openshift-config-operator-7777fb866f-x2lhj\" (UID: \"3e6d740e-c662-41a2-a815-0143fe9e7785\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.620331 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxn59\" (UniqueName: \"kubernetes.io/projected/e80deaa4-4f1c-4a94-9bac-cd4244a7d369-kube-api-access-vxn59\") pod \"control-plane-machine-set-operator-78cbb6b69f-rgs6z\" (UID: \"e80deaa4-4f1c-4a94-9bac-cd4244a7d369\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.636372 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nqzsz"] Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.644848 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqqvt\" (UniqueName: \"kubernetes.io/projected/38cf7724-9e22-4b65-9362-4e712828808d-kube-api-access-nqqvt\") pod \"package-server-manager-789f6589d5-4w9q9\" (UID: \"38cf7724-9e22-4b65-9362-4e712828808d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.648883 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.662748 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7kwz\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-kube-api-access-c7kwz\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.664344 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.671293 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.673699 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.674568 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.174553996 +0000 UTC m=+144.894738976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.682638 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd8g5\" (UniqueName: \"kubernetes.io/projected/c687cb5b-f367-4bba-b59a-bbe77beee146-kube-api-access-jd8g5\") pod \"router-default-5444994796-hmhhf\" (UID: \"c687cb5b-f367-4bba-b59a-bbe77beee146\") " pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.688215 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.703905 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.710329 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ldv5\" (UniqueName: \"kubernetes.io/projected/0b68c52a-173f-4415-9941-1f433247ee6f-kube-api-access-2ldv5\") pod \"service-ca-9c57cc56f-qkrj2\" (UID: \"0b68c52a-173f-4415-9941-1f433247ee6f\") " pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.712161 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.735163 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95fjr\" (UniqueName: \"kubernetes.io/projected/98432c03-3d6e-436b-a2de-5467c1e5f33b-kube-api-access-95fjr\") pod \"packageserver-d55dfcdfc-dnhj4\" (UID: \"98432c03-3d6e-436b-a2de-5467c1e5f33b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.743945 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdcv5\" (UniqueName: \"kubernetes.io/projected/a4a59a1f-9299-46dc-b904-3ec59cd68194-kube-api-access-fdcv5\") pod \"machine-config-operator-74547568cd-p2tmz\" (UID: \"a4a59a1f-9299-46dc-b904-3ec59cd68194\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.768158 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8cm\" (UniqueName: \"kubernetes.io/projected/47d91d12-f724-453e-b5af-c0cb44777ef4-kube-api-access-ft8cm\") pod \"service-ca-operator-777779d784-28zck\" (UID: \"47d91d12-f724-453e-b5af-c0cb44777ef4\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.775337 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.775712 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.27569585 +0000 UTC m=+144.995880830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.778896 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hdft4"] Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.790208 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.806103 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/840d6bcf-e97f-4804-9ed8-164475f990eb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-chmt4\" (UID: \"840d6bcf-e97f-4804-9ed8-164475f990eb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.807433 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.841822 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv6zg\" (UniqueName: \"kubernetes.io/projected/0a51263c-39fa-4c6f-9f1c-6b31707a67a8-kube-api-access-pv6zg\") pod \"machine-config-controller-84d6567774-6pbbr\" (UID: \"0a51263c-39fa-4c6f-9f1c-6b31707a67a8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.850908 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvt26\" (UniqueName: \"kubernetes.io/projected/14d96431-59d9-4550-a933-e94472bd3295-kube-api-access-lvt26\") pod \"kube-storage-version-migrator-operator-b67b599dd-ztv28\" (UID: \"14d96431-59d9-4550-a933-e94472bd3295\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.865244 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/10a30ad2-b78d-4fa3-8f50-9bb0861f88ec-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-xlhh5\" (UID: \"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.872511 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" event={"ID":"7d51c375-5f0e-49cd-86ff-f26eda853733","Type":"ContainerStarted","Data":"169dda967c00a12e386d1af6234a5dd7653bb6cc4031fb29775631518c29c9db"} Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.873443 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nqzsz" event={"ID":"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab","Type":"ContainerStarted","Data":"677195f2b6f666a12643226f082c51eedd76c927c975ec76c7606985939c1a5d"} Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.874848 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" event={"ID":"931e6a97-a601-42c3-8b62-ef08752cf75c","Type":"ContainerStarted","Data":"c866be1c72e4bef8295ccd9e38f52cd158e97b54a7192ee4cfda85087b6ffb29"} Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.874913 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" event={"ID":"931e6a97-a601-42c3-8b62-ef08752cf75c","Type":"ContainerStarted","Data":"a495e837a82a6f4a47dcb6d704f773d5a0867d271bbd04072534d43747637d8b"} Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.875140 4797 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" event={"ID":"931e6a97-a601-42c3-8b62-ef08752cf75c","Type":"ContainerStarted","Data":"4d18a054aeeea200a43f6a8bd45f73a90b9d9635304d51934fd3ccadd3663182"}
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.875928 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" event={"ID":"3b1fe697-5783-4a83-b502-d9f25912c37c","Type":"ContainerStarted","Data":"09f1f0f1bb0fd199e5946f7d748228f4e3fd7617facb56439f9df159b4550529"}
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.875958 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" event={"ID":"3b1fe697-5783-4a83-b502-d9f25912c37c","Type":"ContainerStarted","Data":"f0b8852acafb4d2d14874bf6a5b8c2b989bbacf6802e4d3d115ec21f3ee3ae05"}
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.876415 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.876826 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.376814872 +0000 UTC m=+145.096999852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.878220 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.883460 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6hjr\" (UniqueName: \"kubernetes.io/projected/8ccc8e92-b072-4c98-ba60-8cfbaeef1776-kube-api-access-b6hjr\") pod \"cluster-samples-operator-665b6dd947-6zm46\" (UID: \"8ccc8e92-b072-4c98-ba60-8cfbaeef1776\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.893514 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.896643 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-bound-sa-token\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.899495 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hmhhf"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.918065 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.920464 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4m5\" (UniqueName: \"kubernetes.io/projected/3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85-kube-api-access-kx4m5\") pod \"ingress-operator-5b745b69d9-rsvqr\" (UID: \"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.926720 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.960095 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbwc4\" (UniqueName: \"kubernetes.io/projected/a8d063f9-5090-4321-85d8-739107bcd8da-kube-api-access-gbwc4\") pod \"ingress-canary-xrcnq\" (UID: \"a8d063f9-5090-4321-85d8-739107bcd8da\") " pod="openshift-ingress-canary/ingress-canary-xrcnq"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.971410 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-dxttg"]
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.973094 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8v5b\" (UniqueName: \"kubernetes.io/projected/261bff34-cd36-4214-880f-231fa0f1679b-kube-api-access-l8v5b\") pod \"csi-hostpathplugin-kjc2z\" (UID: \"261bff34-cd36-4214-880f-231fa0f1679b\") " pod="hostpath-provisioner/csi-hostpathplugin-kjc2z"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.977920 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.978056 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.478038928 +0000 UTC m=+145.198223908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.978234 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:09 crc kubenswrapper[4797]: E0216 11:09:09.978990 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.478982814 +0000 UTC m=+145.199167794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.980301 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.991811 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ld54\" (UniqueName: \"kubernetes.io/projected/7f673c7b-0916-4829-9630-1f927c932254-kube-api-access-7ld54\") pod \"collect-profiles-29520660-pm2zr\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"
Feb 16 11:09:09 crc kubenswrapper[4797]: I0216 11:09:09.994762 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.002133 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nj877"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.002176 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4d5np"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.010370 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgdrm\" (UniqueName: \"kubernetes.io/projected/4602521d-4d8a-4753-b873-13a315c7ae18-kube-api-access-hgdrm\") pod \"dns-default-8xchc\" (UID: \"4602521d-4d8a-4753-b873-13a315c7ae18\") " pod="openshift-dns/dns-default-8xchc"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.018974 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn2pg\" (UniqueName: \"kubernetes.io/projected/4c5682bf-873f-4b33-ad1e-a518eedb1f6b-kube-api-access-cn2pg\") pod \"machine-config-server-4dv7z\" (UID: \"4c5682bf-873f-4b33-ad1e-a518eedb1f6b\") " pod="openshift-machine-config-operator/machine-config-server-4dv7z"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.019498 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.027912 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xrcnq"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.038928 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-4dv7z"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.043015 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8xchc"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.063399 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.079046 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.079370 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.579353775 +0000 UTC m=+145.299538755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.113022 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.120255 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.131683 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.180432 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.180873 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.680854209 +0000 UTC m=+145.401039229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.281559 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.282806 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.782772643 +0000 UTC m=+145.502957623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.322199 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnvtz"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.331792 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.333640 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.336385 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.359817 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mxqz2"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.361184 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.373028 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.383343 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.383679 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.88366571 +0000 UTC m=+145.603850690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.484441 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.485284 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:10.985259315 +0000 UTC m=+145.705444295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.586349 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.586678 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.086660396 +0000 UTC m=+145.806845376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.687413 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.688749 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.188720874 +0000 UTC m=+145.908905854 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.797197 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.797553 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.297541885 +0000 UTC m=+146.017726855 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.797845 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rgnb"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.829170 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.856141 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qtxzc"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.857968 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.858112 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-pvwfm" podStartSLOduration=122.858101467 podStartE2EDuration="2m2.858101467s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:10.848889618 +0000 UTC m=+145.569074598" watchObservedRunningTime="2026-02-16 11:09:10.858101467 +0000 UTC m=+145.578286447"
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.882945 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-tg9bq"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.888544 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.890982 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.893240 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.895478 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-qkrj2"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.896227 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wjqbl"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.901176 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:10 crc kubenswrapper[4797]: E0216 11:09:10.901784 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.401767152 +0000 UTC m=+146.121952132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.917363 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4dv7z" event={"ID":"4c5682bf-873f-4b33-ad1e-a518eedb1f6b","Type":"ContainerStarted","Data":"9919a5cfdef3ac66afd48d66116d25a202520ebfc635701affd3b9e1b7142ac2"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.917765 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-4dv7z" event={"ID":"4c5682bf-873f-4b33-ad1e-a518eedb1f6b","Type":"ContainerStarted","Data":"6fc9fba9eb89828d08b4bd7d90a30a83c9df25e474168797cfa8e36df38eb5f3"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.919795 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.927281 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj"]
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.935801 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4d5np" event={"ID":"61891ace-57b4-446d-afb5-cec9848da89a","Type":"ContainerStarted","Data":"e2ccfde0b7bee3cda4fe60ce9a6fa75995b17bc372202e7314c4ae0e0edd8ffe"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.942482 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hmhhf" event={"ID":"c687cb5b-f367-4bba-b59a-bbe77beee146","Type":"ContainerStarted","Data":"9353a65a7e5aa3111ad0cbcc9b742fb0250ff14528f21e45dc3d8fb5da8e8081"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.942523 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hmhhf" event={"ID":"c687cb5b-f367-4bba-b59a-bbe77beee146","Type":"ContainerStarted","Data":"e56d8b52b588f289f5a3b8e3d548b3877a004811950c227742ed66462e37fee4"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.959062 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" event={"ID":"ad05eae6-52a0-4044-a080-06cb3ebc5a04","Type":"ContainerStarted","Data":"99cfed1bb60762ef56aca77fd98c91256f2e6e9dffcd5e14fee280ab872edb93"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.962182 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" event={"ID":"d2f2e6ac-38ac-41dd-b195-7fe50447270e","Type":"ContainerStarted","Data":"520ed31b85ed2bd43075376cbca9e3fcb39833a02605484e45f426a9f831ec56"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.965460 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" event={"ID":"ba79c217-436f-4765-897e-95e388aed4b4","Type":"ContainerStarted","Data":"d09020f57e35e1a24353ad0b8b6aa6307893c6fa4fb86a01431a7567761b14a6"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.970080 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" event={"ID":"3699cc64-5615-4ce7-890a-d8fbed713b4c","Type":"ContainerStarted","Data":"ae1cde7d60ccc0691e3ddf07c559484fee8a0a65fb77fc3b7f73ee4f8e9e1dc2"}
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.976092 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" event={"ID":"3b1fe697-5783-4a83-b502-d9f25912c37c","Type":"ContainerStarted","Data":"5e2af04743f4ea52e914bc1e4677a352962a95737c9301752267c4c72590c214"}
Feb 16 11:09:10 crc kubenswrapper[4797]: W0216 11:09:10.980457 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228b4e9e_a51f_4fce_af91_4af93c9f3aa6.slice/crio-e1454d8d8013d40554af94216634591ca4574bbaecadccede67ba9a4014e5a6f WatchSource:0}: Error finding container e1454d8d8013d40554af94216634591ca4574bbaecadccede67ba9a4014e5a6f: Status 404 returned error can't find the container with id e1454d8d8013d40554af94216634591ca4574bbaecadccede67ba9a4014e5a6f
Feb 16 11:09:10 crc kubenswrapper[4797]: I0216 11:09:10.986983 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.002471 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.003477 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.50345915 +0000 UTC m=+146.223644210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.006247 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" event={"ID":"45b58dea-daa7-4b11-b6b9-c5a9471f1129","Type":"ContainerStarted","Data":"246504eb927279a1e92fe562a0bcd7ca44503a1c78b72301fcaf7e5e165a79a3"}
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.011140 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" event={"ID":"e60d9bf0-73bb-4eb5-ab0e-cce684085087","Type":"ContainerStarted","Data":"ec5f9413b22158e849a410f67bc84f0e4c9652cd4bcd9dddd65a25b79d1c4931"}
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.025364 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nqzsz" event={"ID":"d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab","Type":"ContainerStarted","Data":"d1959d0d650854d4dc4a89309009c086a33809a47ea994f37b90df0941d2f25d"}
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.026544 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.027950 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nqzsz"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.030201 4797 patch_prober.go:28] interesting pod/console-operator-58897d9998-nqzsz container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.030265 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nqzsz" podUID="d0da0ff7-fad2-4e07-a2ac-c298d5e7d5ab" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.032464 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.042267 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" event={"ID":"7d51c375-5f0e-49cd-86ff-f26eda853733","Type":"ContainerStarted","Data":"a2013ac764d9a7b9fc09cd0c9f2e49e377ac49efe73de7ffaaf919226346f46b"}
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.044115 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.047255 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" event={"ID":"cb64bae9-2b5d-4ad4-b184-36f36908713a","Type":"ContainerStarted","Data":"72c93867d259dc89bb5f98be2f92c24a2377924d4a55b672272945c1429748cc"}
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.050669 4797 csr.go:261] certificate signing request csr-g6wcw is approved, waiting to be issued
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.053430 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dxttg" event={"ID":"8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96","Type":"ContainerStarted","Data":"bc471c8dbf632f2af2e034bdf94a4505fe4bf27a52446dbd8b6bf235181c35cf"}
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.056236 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-dxttg"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.058893 4797 patch_prober.go:28] interesting pod/downloads-7954f5f757-dxttg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.059002 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dxttg" podUID="8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused"
Feb 16 11:09:11 crc kubenswrapper[4797]: W0216 11:09:11.060553 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a51263c_39fa_4c6f_9f1c_6b31707a67a8.slice/crio-d9b3cdd7db84bc5afd820f36440017a97024cc928cc21041b36a3e04961707c9 WatchSource:0}: Error finding container d9b3cdd7db84bc5afd820f36440017a97024cc928cc21041b36a3e04961707c9: Status 404 returned error can't find the container with id d9b3cdd7db84bc5afd820f36440017a97024cc928cc21041b36a3e04961707c9
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.061079 4797 csr.go:257] certificate signing request csr-g6wcw is issued
Feb 16 11:09:11 crc kubenswrapper[4797]: W0216 11:09:11.067746 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14d96431_59d9_4550_a933_e94472bd3295.slice/crio-6592009477ec945b930ceae165041afbeadbc25de6a983898bda55edaaf2d5a4 WatchSource:0}: Error finding container 6592009477ec945b930ceae165041afbeadbc25de6a983898bda55edaaf2d5a4: Status 404 returned error can't find the container with id 6592009477ec945b930ceae165041afbeadbc25de6a983898bda55edaaf2d5a4
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.070445 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-28zck"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.076015 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" event={"ID":"71883ef1-52ce-4531-8997-33fd0589cccf","Type":"ContainerStarted","Data":"081fd51452f366f04d4cd0b7a054f48357719f1ba47f62d9abba68b25a756621"}
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.076550 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.076598 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz"]
Feb 16 11:09:11 crc kubenswrapper[4797]: W0216 11:09:11.079000 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod840d6bcf_e97f_4804_9ed8_164475f990eb.slice/crio-b45b0c3b6b500fd65b617be19aaaf8fd0caa2f4d0ff1038ac6acb0377fab83bf WatchSource:0}: Error finding container b45b0c3b6b500fd65b617be19aaaf8fd0caa2f4d0ff1038ac6acb0377fab83bf: Status 404 returned error can't find the container with id b45b0c3b6b500fd65b617be19aaaf8fd0caa2f4d0ff1038ac6acb0377fab83bf
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.079714 4797 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-72hd2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.079750 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" podUID="71883ef1-52ce-4531-8997-33fd0589cccf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.080801 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" event={"ID":"0bf776c0-392b-4a88-86df-a31fc1538e5f","Type":"ContainerStarted","Data":"db5b55b30f8de221ac0ed22d0a6782dc17949fb29816050ff3378da529e7066c"}
Feb 16 11:09:11 crc kubenswrapper[4797]: W0216 11:09:11.101620 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4a59a1f_9299_46dc_b904_3ec59cd68194.slice/crio-dde6eabc0571ad92e53c151819b1336de2cade6342d560387728cba496bc10eb WatchSource:0}: Error finding container dde6eabc0571ad92e53c151819b1336de2cade6342d560387728cba496bc10eb: Status 404 returned error can't find the container with id dde6eabc0571ad92e53c151819b1336de2cade6342d560387728cba496bc10eb
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.103683 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.105127 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.605109588 +0000 UTC m=+146.325294568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.204532 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.205467 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.705457019 +0000 UTC m=+146.425641999 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.265872 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.284348 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xrcnq"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.290674 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.306617 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.306681 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8xchc"]
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.306989 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.806974903 +0000 UTC m=+146.527159873 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.323959 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.325906 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-kjc2z"]
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.366260 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr"]
Feb 16 11:09:11 crc kubenswrapper[4797]: W0216 11:09:11.394450 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d063f9_5090_4321_85d8_739107bcd8da.slice/crio-45ac3e1b63c21b6c2b7fc2d00d0f270abc3226f9de831a48ae8e0a1806916093 WatchSource:0}: Error finding container 45ac3e1b63c21b6c2b7fc2d00d0f270abc3226f9de831a48ae8e0a1806916093: Status 404 returned error can't find the container with id 45ac3e1b63c21b6c2b7fc2d00d0f270abc3226f9de831a48ae8e0a1806916093
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.421991 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.422306 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:11.92229412 +0000 UTC m=+146.642479100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: W0216 11:09:11.514683 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4602521d_4d8a_4753_b873_13a315c7ae18.slice/crio-8379b0264fa2a129920bfc2ba577d8a37498561da980daf98f67d128145196a2 WatchSource:0}: Error finding container 8379b0264fa2a129920bfc2ba577d8a37498561da980daf98f67d128145196a2: Status 404 returned error can't find the container with id 8379b0264fa2a129920bfc2ba577d8a37498561da980daf98f67d128145196a2
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.524216 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.524531 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.024515302 +0000 UTC m=+146.744700272 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.540688 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hmhhf" podStartSLOduration=123.540667111 podStartE2EDuration="2m3.540667111s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:11.538015759 +0000 UTC m=+146.258200739" watchObservedRunningTime="2026-02-16 11:09:11.540667111 +0000 UTC m=+146.260852081"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.606121 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wnkfx" podStartSLOduration=123.606094605 podStartE2EDuration="2m3.606094605s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:11.605815758 +0000 UTC m=+146.326000728" watchObservedRunningTime="2026-02-16 11:09:11.606094605 +0000 UTC m=+146.326279585"
Feb 16 11:09:11 crc kubenswrapper[4797]: W0216 11:09:11.618628 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10a30ad2_b78d_4fa3_8f50_9bb0861f88ec.slice/crio-281e0f37b1c2164f18a777a3c9c20354819b2c53f51c90da7759f80573a72102 WatchSource:0}: Error finding container 281e0f37b1c2164f18a777a3c9c20354819b2c53f51c90da7759f80573a72102: Status 404 returned error can't find the container with id 281e0f37b1c2164f18a777a3c9c20354819b2c53f51c90da7759f80573a72102
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.625530 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.625998 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.125976105 +0000 UTC m=+146.846161255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.626271 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" podStartSLOduration=123.626250823 podStartE2EDuration="2m3.626250823s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:11.624887595 +0000 UTC m=+146.345072565" watchObservedRunningTime="2026-02-16 11:09:11.626250823 +0000 UTC m=+146.346435803"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.696810 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nqzsz" podStartSLOduration=123.696781125 podStartE2EDuration="2m3.696781125s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:11.654967791 +0000 UTC m=+146.375152791" watchObservedRunningTime="2026-02-16 11:09:11.696781125 +0000 UTC m=+146.416966105"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.697349 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-dxttg" podStartSLOduration=123.697346031 podStartE2EDuration="2m3.697346031s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:11.694052661 +0000 UTC m=+146.414237641" watchObservedRunningTime="2026-02-16 11:09:11.697346031 +0000 UTC m=+146.417531001"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.704085 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.704166 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.726335 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.727772 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.227737325 +0000 UTC m=+146.947922315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.829320 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.830159 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.330138062 +0000 UTC m=+147.050323042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.900048 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hmhhf"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.905389 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 11:09:11 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld
Feb 16 11:09:11 crc kubenswrapper[4797]: [+]process-running ok
Feb 16 11:09:11 crc kubenswrapper[4797]: healthz check failed
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.905434 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 11:09:11 crc kubenswrapper[4797]: I0216 11:09:11.931003 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:11 crc kubenswrapper[4797]: E0216 11:09:11.931377 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.431362468 +0000 UTC m=+147.151547448 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.036131 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.036610 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.536589132 +0000 UTC m=+147.256774112 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.062058 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 11:04:11 +0000 UTC, rotation deadline is 2026-12-24 05:37:03.364794032 +0000 UTC
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.062104 4797 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7458h27m51.302692338s for next certificate rotation
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.090241 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" event={"ID":"3e6d740e-c662-41a2-a815-0143fe9e7785","Type":"ContainerStarted","Data":"6e1aa2d7d48198b52a5f2a0b7d75a9f73ffe3d4fd89d079bcc1b0f9a4f307bed"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.090283 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" event={"ID":"3e6d740e-c662-41a2-a815-0143fe9e7785","Type":"ContainerStarted","Data":"82e0bef29542e3677073078c90eee12454050124a02448ebb823a35e98850309"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.104069 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" event={"ID":"3ef2f4d2-8723-4555-a4a4-eda869af0507","Type":"ContainerStarted","Data":"8da2a5e7847fa04bd684063ffbc19a861d597f54e76be095705a0686b702cfc1"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.104123 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" event={"ID":"3ef2f4d2-8723-4555-a4a4-eda869af0507","Type":"ContainerStarted","Data":"7f662b245f0bcbf28c9058e8619a6370666a5237e4a058c1537ec87eb8b58b59"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.131881 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-kkhxz" podStartSLOduration=124.131865086 podStartE2EDuration="2m4.131865086s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.131734182 +0000 UTC m=+146.851919172" watchObservedRunningTime="2026-02-16 11:09:12.131865086 +0000 UTC m=+146.852050066"
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.135558 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" event={"ID":"98432c03-3d6e-436b-a2de-5467c1e5f33b","Type":"ContainerStarted","Data":"21493eb4424b4dfe0f90c1c8860945669fa17e4ed84e4e72223c72d77b9fb10d"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.135627 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" event={"ID":"98432c03-3d6e-436b-a2de-5467c1e5f33b","Type":"ContainerStarted","Data":"bd2517727f7cbe2070ee6b99a1eeb5026bd10114464544a60effaadce0a02a49"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.135846 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4"
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.137042 4797 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dnhj4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused" start-of-body=
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.137087 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" podUID="98432c03-3d6e-436b-a2de-5467c1e5f33b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": dial tcp 10.217.0.40:5443: connect: connection refused"
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.141352 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.141450 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.641430576 +0000 UTC m=+147.361615546 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.141786 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.141980 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4d5np" event={"ID":"61891ace-57b4-446d-afb5-cec9848da89a","Type":"ContainerStarted","Data":"4c3895fc2bb657dcb402912473d4dede9d7ac3c0128601a9720958b207ff6644"}
Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.144650 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.644629622 +0000 UTC m=+147.364814602 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.146026 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-dxttg" event={"ID":"8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96","Type":"ContainerStarted","Data":"440066455cfecf58a80f3b79804b912f18accd31243084862d6fab8347652c48"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.146550 4797 patch_prober.go:28] interesting pod/downloads-7954f5f757-dxttg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.146608 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dxttg" podUID="8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused"
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.178144 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" event={"ID":"228b4e9e-a51f-4fce-af91-4af93c9f3aa6","Type":"ContainerStarted","Data":"e1454d8d8013d40554af94216634591ca4574bbaecadccede67ba9a4014e5a6f"}
Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.187529 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" event={"ID":"840d6bcf-e97f-4804-9ed8-164475f990eb","Type":"ContainerStarted","Data":"b45b0c3b6b500fd65b617be19aaaf8fd0caa2f4d0ff1038ac6acb0377fab83bf"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.190463 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" event={"ID":"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec","Type":"ContainerStarted","Data":"281e0f37b1c2164f18a777a3c9c20354819b2c53f51c90da7759f80573a72102"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.211718 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4d5np" podStartSLOduration=124.211700612 podStartE2EDuration="2m4.211700612s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.210288803 +0000 UTC m=+146.930473783" watchObservedRunningTime="2026-02-16 11:09:12.211700612 +0000 UTC m=+146.931885592" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.215310 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" podStartSLOduration=124.215294089 podStartE2EDuration="2m4.215294089s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.173747152 +0000 UTC m=+146.893932132" watchObservedRunningTime="2026-02-16 11:09:12.215294089 +0000 UTC m=+146.935479069" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.227445 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" event={"ID":"e80deaa4-4f1c-4a94-9bac-cd4244a7d369","Type":"ContainerStarted","Data":"d24a088af512505f71f041cf45921a580daa39a2acbfaa26c0778f218549631d"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.238612 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" event={"ID":"1d8483dc-9868-4194-9feb-488816a99fbe","Type":"ContainerStarted","Data":"b87975c987691c7d1fc34cc263a6b9a0574ca3bac61df34d6d118034491e3336"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.248595 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" podStartSLOduration=124.248533881 podStartE2EDuration="2m4.248533881s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.246845844 +0000 UTC m=+146.967030844" watchObservedRunningTime="2026-02-16 11:09:12.248533881 +0000 UTC m=+146.968718861" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.249796 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.249895 
4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.749882297 +0000 UTC m=+147.470067277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.252653 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.253495 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.753483555 +0000 UTC m=+147.473668535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.274654 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" event={"ID":"7d483c84-5b4f-4e05-aca6-526ff414a70c","Type":"ContainerStarted","Data":"9d2094a8f7c48cf55bcb13056aa323b1891d4194c6f71d94bc6fa82a2918904c"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.274712 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" event={"ID":"7d483c84-5b4f-4e05-aca6-526ff414a70c","Type":"ContainerStarted","Data":"cfb89dfa1c1befb1a149cce65d659944b967c658bb8386f0ee84b55bc02ae8cd"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.316149 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xrcnq" event={"ID":"a8d063f9-5090-4321-85d8-739107bcd8da","Type":"ContainerStarted","Data":"45ac3e1b63c21b6c2b7fc2d00d0f270abc3226f9de831a48ae8e0a1806916093"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.318969 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" event={"ID":"7f673c7b-0916-4829-9630-1f927c932254","Type":"ContainerStarted","Data":"75c385b2d74966ff18b75888040a027b9520f64a598aefa5a2a0adc91561ec76"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.326758 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" event={"ID":"47d91d12-f724-453e-b5af-c0cb44777ef4","Type":"ContainerStarted","Data":"b57e403f220f1945cc2e8b1ea7af05e6a51453bdf1cddec7d6f12a60b8dd4183"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.328709 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" event={"ID":"0a51263c-39fa-4c6f-9f1c-6b31707a67a8","Type":"ContainerStarted","Data":"d9b3cdd7db84bc5afd820f36440017a97024cc928cc21041b36a3e04961707c9"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.329632 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" event={"ID":"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85","Type":"ContainerStarted","Data":"f727523024d181781115ac8dafabcc8e3d9125af465e9aea07e09ba62ae969f6"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.331593 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" event={"ID":"71883ef1-52ce-4531-8997-33fd0589cccf","Type":"ContainerStarted","Data":"c3b2e98bda193999bb4dd49abea3c359f3a9f4489d0d1e47a020751f14b5f8be"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.335398 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" event={"ID":"a4a59a1f-9299-46dc-b904-3ec59cd68194","Type":"ContainerStarted","Data":"dde6eabc0571ad92e53c151819b1336de2cade6342d560387728cba496bc10eb"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.346998 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" podStartSLOduration=124.346981471 podStartE2EDuration="2m4.346981471s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.34695995 +0000 UTC m=+147.067144930" watchObservedRunningTime="2026-02-16 11:09:12.346981471 +0000 UTC m=+147.067166451" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.353519 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.353727 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.853704813 +0000 UTC m=+147.573889813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.353799 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.354921 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.854908546 +0000 UTC m=+147.575093526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.376000 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-72hd2" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.377964 4797 generic.go:334] "Generic (PLEG): container finished" podID="3699cc64-5615-4ce7-890a-d8fbed713b4c" containerID="dc308e42ae9be00b16b4e3d2bff24abb3b7f5f6fa437a53c735e12a1e9256f5c" exitCode=0 Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.378033 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" event={"ID":"3699cc64-5615-4ce7-890a-d8fbed713b4c","Type":"ContainerDied","Data":"dc308e42ae9be00b16b4e3d2bff24abb3b7f5f6fa437a53c735e12a1e9256f5c"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.381628 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" event={"ID":"d2f2e6ac-38ac-41dd-b195-7fe50447270e","Type":"ContainerStarted","Data":"1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.382011 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.384453 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" event={"ID":"261bff34-cd36-4214-880f-231fa0f1679b","Type":"ContainerStarted","Data":"4c61fdd341d5a8045649aa05b11de56ed20bc90f600f6eb46e400aa54b43f464"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.390844 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" event={"ID":"38cf7724-9e22-4b65-9362-4e712828808d","Type":"ContainerStarted","Data":"d93316f9a8a4851a029414f3bc41a71a2cd8d0488a0ee976e74a47fd03309825"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.406037 4797 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hnvtz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.406098 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" podUID="d2f2e6ac-38ac-41dd-b195-7fe50447270e" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.421239 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" event={"ID":"3d61b69d-a67d-4c60-9691-ccc3b8f24608","Type":"ContainerStarted","Data":"b7b14d65a71bcec32fa6df83c1f3a37fb5748d92cb064f63885c2c1e94d1785d"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.421347 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" event={"ID":"3d61b69d-a67d-4c60-9691-ccc3b8f24608","Type":"ContainerStarted","Data":"dd63a9d687354f61aaacef2dcad3f992f3b217cd91bf41bf2b727d6760214542"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.422363 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.423277 4797 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-h68np container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.423336 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" podUID="3d61b69d-a67d-4c60-9691-ccc3b8f24608" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.444755 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" event={"ID":"ad05eae6-52a0-4044-a080-06cb3ebc5a04","Type":"ContainerStarted","Data":"b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.450901 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.454845 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.455160 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:12.955133214 +0000 UTC m=+147.675318194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.462707 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" podStartSLOduration=124.462688689 podStartE2EDuration="2m4.462688689s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.460243632 +0000 UTC m=+147.180428622" watchObservedRunningTime="2026-02-16 11:09:12.462688689 +0000 UTC m=+147.182873669" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.476956 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" event={"ID":"45b58dea-daa7-4b11-b6b9-c5a9471f1129","Type":"ContainerStarted","Data":"e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.477947 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.495961 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" podStartSLOduration=124.495938081 podStartE2EDuration="2m4.495938081s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.493021741 +0000 UTC m=+147.213206731" watchObservedRunningTime="2026-02-16 11:09:12.495938081 +0000 UTC m=+147.216123061" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.521342 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" podStartSLOduration=124.521319749 podStartE2EDuration="2m4.521319749s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.519956292 +0000 UTC m=+147.240141272" watchObservedRunningTime="2026-02-16 11:09:12.521319749 +0000 UTC m=+147.241504729" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.532614 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" 
event={"ID":"14d96431-59d9-4550-a933-e94472bd3295","Type":"ContainerStarted","Data":"6592009477ec945b930ceae165041afbeadbc25de6a983898bda55edaaf2d5a4"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.554386 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" podStartSLOduration=124.554366075 podStartE2EDuration="2m4.554366075s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.551148828 +0000 UTC m=+147.271333828" watchObservedRunningTime="2026-02-16 11:09:12.554366075 +0000 UTC m=+147.274551085" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.556736 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.558299 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.058285021 +0000 UTC m=+147.778470001 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.575415 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" event={"ID":"eb92edf8-d734-4849-9dc1-26e8a68ce802","Type":"ContainerStarted","Data":"3d3dbe6cd1f21f9f1427fb2f65069081c223f78d222efd121519963b9655a572"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.575454 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" event={"ID":"eb92edf8-d734-4849-9dc1-26e8a68ce802","Type":"ContainerStarted","Data":"ea51bbbd7cf5586de5df2849c53daf60537a40f68567f2d3ec1a227179b6fb05"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.582009 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" event={"ID":"f7df1020-cfec-446c-8cee-66f3ed9a7f79","Type":"ContainerStarted","Data":"c8dba5fe2e80215da760a93299e67afb2e5904f54368a9ab83e68e6ef6a896f4"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.582051 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" event={"ID":"f7df1020-cfec-446c-8cee-66f3ed9a7f79","Type":"ContainerStarted","Data":"69e77c60e94359808a897ef5fb1bd71ac4356d22fe6f78d6d63dfca1aec3992e"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.601055 4797 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" podStartSLOduration=124.601039022 podStartE2EDuration="2m4.601039022s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.570471482 +0000 UTC m=+147.290656462" watchObservedRunningTime="2026-02-16 11:09:12.601039022 +0000 UTC m=+147.321224002" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.605352 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-5hhbn" podStartSLOduration=124.605333718 podStartE2EDuration="2m4.605333718s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.59914876 +0000 UTC m=+147.319333740" watchObservedRunningTime="2026-02-16 11:09:12.605333718 +0000 UTC m=+147.325518698" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.621266 4797 generic.go:334] "Generic (PLEG): container finished" podID="0bf776c0-392b-4a88-86df-a31fc1538e5f" containerID="edf9e44127d86a7c183fbfbd901029473ef86be83ddd1f634dc3e13dffb1417d" exitCode=0 Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.621347 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" event={"ID":"0bf776c0-392b-4a88-86df-a31fc1538e5f","Type":"ContainerDied","Data":"edf9e44127d86a7c183fbfbd901029473ef86be83ddd1f634dc3e13dffb1417d"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.645826 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" event={"ID":"ba79c217-436f-4765-897e-95e388aed4b4","Type":"ContainerStarted","Data":"301d0043488494a87222d8d9252f671f8f6265b1e97e57cfc22464cf585f49e1"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.661001 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.663034 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.163006262 +0000 UTC m=+147.883191242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.670005 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.672960 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.172941781 +0000 UTC m=+147.893126761 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.675281 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8wzvf" podStartSLOduration=124.675257974 podStartE2EDuration="2m4.675257974s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.630672325 +0000 UTC m=+147.350857305" watchObservedRunningTime="2026-02-16 11:09:12.675257974 +0000 UTC m=+147.395442954" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.708110 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-p4ktc" podStartSLOduration=124.708094145 podStartE2EDuration="2m4.708094145s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.707070777 +0000 UTC m=+147.427255757" watchObservedRunningTime="2026-02-16 11:09:12.708094145 +0000 UTC m=+147.428279125" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.720275 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" event={"ID":"e60d9bf0-73bb-4eb5-ab0e-cce684085087","Type":"ContainerStarted","Data":"b7798a3f96e7ccea9c48094711bc2f7d29a104b371f501f26d2d1143d5b5e249"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.720313 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" 
event={"ID":"e60d9bf0-73bb-4eb5-ab0e-cce684085087","Type":"ContainerStarted","Data":"a0429b9d30962655419470d4e429671348c92b6ca0c66c700195681e5670ee2c"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.735639 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" event={"ID":"7d51c375-5f0e-49cd-86ff-f26eda853733","Type":"ContainerStarted","Data":"90d36823baf92f4d8d8ea06ad07f9e16746d06d0b02c49911663a58d8ec4ec00"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.745988 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8xchc" event={"ID":"4602521d-4d8a-4753-b873-13a315c7ae18","Type":"ContainerStarted","Data":"8379b0264fa2a129920bfc2ba577d8a37498561da980daf98f67d128145196a2"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.750524 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4v5ch" podStartSLOduration=124.750507925 podStartE2EDuration="2m4.750507925s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.745770926 +0000 UTC m=+147.465955906" watchObservedRunningTime="2026-02-16 11:09:12.750507925 +0000 UTC m=+147.470692905" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.759218 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" event={"ID":"cb64bae9-2b5d-4ad4-b184-36f36908713a","Type":"ContainerStarted","Data":"af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.760100 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.766764 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" event={"ID":"0b68c52a-173f-4415-9941-1f433247ee6f","Type":"ContainerStarted","Data":"15412ad63654a1d4f90995af72bc9f0598591ecef938767c4ae5ec83d3dffecd"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.766805 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" event={"ID":"0b68c52a-173f-4415-9941-1f433247ee6f","Type":"ContainerStarted","Data":"8ea8469746c0501e9df9fd1cc33a5a529f30fd97eeb9903053a733ae503f4a08"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.770079 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-hdft4" podStartSLOduration=124.770062486 podStartE2EDuration="2m4.770062486s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.767668781 +0000 UTC m=+147.487853761" watchObservedRunningTime="2026-02-16 11:09:12.770062486 +0000 UTC m=+147.490247466" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.772086 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 
16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.773313 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.273297613 +0000 UTC m=+147.993482593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.783785 4797 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5rgnb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.783826 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" podUID="cb64bae9-2b5d-4ad4-b184-36f36908713a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.784038 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" event={"ID":"8ccc8e92-b072-4c98-ba60-8cfbaeef1776","Type":"ContainerStarted","Data":"25a50d3ed79135ec15ec17b549cc364c2d3d63bbb87eb4d8b3b1cd579e611559"} Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.796474 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" podStartSLOduration=124.796458381 podStartE2EDuration="2m4.796458381s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.792948086 +0000 UTC m=+147.513133066" watchObservedRunningTime="2026-02-16 11:09:12.796458381 +0000 UTC m=+147.516643371" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.802758 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nqzsz" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.812638 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-qkrj2" podStartSLOduration=124.812552528 podStartE2EDuration="2m4.812552528s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.811199132 +0000 UTC m=+147.531384112" watchObservedRunningTime="2026-02-16 11:09:12.812552528 +0000 UTC m=+147.532737508" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.859059 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-4dv7z" 
podStartSLOduration=6.859041039 podStartE2EDuration="6.859041039s" podCreationTimestamp="2026-02-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.830362581 +0000 UTC m=+147.550547561" watchObservedRunningTime="2026-02-16 11:09:12.859041039 +0000 UTC m=+147.579226029" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.873606 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.878278 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.37823435 +0000 UTC m=+148.098419330 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.903156 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:12 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:12 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:12 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.903219 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:12 crc kubenswrapper[4797]: I0216 11:09:12.978256 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:12 crc kubenswrapper[4797]: E0216 11:09:12.978465 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.478439447 +0000 UTC m=+148.198624427 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.037700 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.061080 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" podStartSLOduration=125.061050248 podStartE2EDuration="2m5.061050248s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:12.892803365 +0000 UTC m=+147.612988345" watchObservedRunningTime="2026-02-16 11:09:13.061050248 +0000 UTC m=+147.781235228" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.080631 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.081000 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.580988239 +0000 UTC m=+148.301173219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.181766 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.182179 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.682164673 +0000 UTC m=+148.402349653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.282817 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.283277 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.783248205 +0000 UTC m=+148.503433245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.384133 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.384371 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.884297895 +0000 UTC m=+148.604482875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.384594 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.384948 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.884935032 +0000 UTC m=+148.605120012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.410410 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.486619 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.986558059 +0000 UTC m=+148.706743039 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.486672 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.487277 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.487831 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:13.987815403 +0000 UTC m=+148.708000383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.588930 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.589116 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.08908871 +0000 UTC m=+148.809273690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.589312 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.589674 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.089650926 +0000 UTC m=+148.809835906 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.690189 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.690364 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.190335696 +0000 UTC m=+148.910520676 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.690424 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.690732 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.190725577 +0000 UTC m=+148.910910557 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.790993 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.791132 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.291111449 +0000 UTC m=+149.011296429 (durationBeforeRetry 500ms). 
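Several near-identical rounds of this within one second are expected: the reconciler re-scans frequently, but nestedpendingoperations gates each failed volume operation individually, refusing to retry it until now plus durationBeforeRetry (500ms in these entries; the kubelet will lengthen the window if the identical operation keeps failing). A standalone toy loop, standard library only and explicitly not kubelet code, that mimics that gating:

```go
// Sketch of the retry gating visible above: after a failed volume operation,
// no retry is permitted until a deadline 500ms in the future.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountDevice is a stand-in that always fails the way the log does.
func mountDevice() error {
	return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
}

func main() {
	backoff := 500 * time.Millisecond
	var notBefore time.Time // zero value: first attempt is allowed immediately
	for i := 0; i < 5; i++ {
		if time.Now().Before(notBefore) {
			time.Sleep(time.Until(notBefore)) // honor "no retries permitted until"
		}
		if err := mountDevice(); err != nil {
			notBefore = time.Now().Add(backoff)
			fmt.Printf("failed, no retries permitted until %s: %v\n",
				notBefore.Format(time.RFC3339Nano), err)
			continue
		}
		fmt.Println("mounted")
		return
	}
}
```

The log continues with the gated operation's error detail: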
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.791221 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.791664 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.291653605 +0000 UTC m=+149.011838585 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.792289 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" event={"ID":"7d483c84-5b4f-4e05-aca6-526ff414a70c","Type":"ContainerStarted","Data":"ed16528d764cfd646dc7ed383ab5b0b9548b0b90f59d7500c8831eeb1a6cab6c"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.794643 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" event={"ID":"a4a59a1f-9299-46dc-b904-3ec59cd68194","Type":"ContainerStarted","Data":"b6539202173985f8f53a78449580b4cd7fb99cf19b3d2cb8a506b4e38a486adb"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.794696 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" event={"ID":"a4a59a1f-9299-46dc-b904-3ec59cd68194","Type":"ContainerStarted","Data":"34b11456fa331ef5530930d533303bc58acbea95742da7c857d349579fd3f6ea"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.797653 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" event={"ID":"3699cc64-5615-4ce7-890a-d8fbed713b4c","Type":"ContainerStarted","Data":"2f2c145b86af21d1f495a320ed57d3c4c4d9408c175e5dd8ee621a817c39846f"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.797678 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" event={"ID":"3699cc64-5615-4ce7-890a-d8fbed713b4c","Type":"ContainerStarted","Data":"b7fcc76e95d6da08ca01e7efbf1ff3f333fc9ce20ec0dd3a887d17444395032e"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.799698 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-canary/ingress-canary-xrcnq" event={"ID":"a8d063f9-5090-4321-85d8-739107bcd8da","Type":"ContainerStarted","Data":"3e0aed73ecd8a80e18996693754f3cb06365bfcf149ce7d08cf8967e88f4843b"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.801894 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" event={"ID":"0bf776c0-392b-4a88-86df-a31fc1538e5f","Type":"ContainerStarted","Data":"acd6575905cdc27eecafb48b7a9b76fbcb1b76c8b09e7626efc0d914465605f4"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.803204 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-rgs6z" event={"ID":"e80deaa4-4f1c-4a94-9bac-cd4244a7d369","Type":"ContainerStarted","Data":"ec4ca3b3330ddbfc7490a2cd15988c6b0ec00401ac4c17840891640cb2672e0f"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.805706 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" event={"ID":"1d8483dc-9868-4194-9feb-488816a99fbe","Type":"ContainerStarted","Data":"1fe84a05d7235ae11067503028f8ef3fec38882e02c216ca493fc60c4213da5f"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.808167 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-28zck" event={"ID":"47d91d12-f724-453e-b5af-c0cb44777ef4","Type":"ContainerStarted","Data":"630085f64065fbbe561623364d6b97ed7cf8f6b0c0ef9a992f92f342f1c791df"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.809925 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" event={"ID":"228b4e9e-a51f-4fce-af91-4af93c9f3aa6","Type":"ContainerStarted","Data":"7186fe420ceefc6524f4ba1aac4cbc92c4e12ccca9f4f61c9d6892ba3a175ec1"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.812598 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8xchc" event={"ID":"4602521d-4d8a-4753-b873-13a315c7ae18","Type":"ContainerStarted","Data":"283f711c242b8ee5900d582bf4493acf542c56ef3e3312c9aa2fd4e8ece56779"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.812637 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8xchc" event={"ID":"4602521d-4d8a-4753-b873-13a315c7ae18","Type":"ContainerStarted","Data":"a26d8be136ece83517d331db3738d43e7c4ecc2d2e0b55d93224e989eaa3172d"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.812707 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.816961 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wjqbl" podStartSLOduration=125.816947791 podStartE2EDuration="2m5.816947791s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:13.815622074 +0000 UTC m=+148.535807054" watchObservedRunningTime="2026-02-16 11:09:13.816947791 +0000 UTC m=+148.537132771" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.817148 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" 
event={"ID":"10a30ad2-b78d-4fa3-8f50-9bb0861f88ec","Type":"ContainerStarted","Data":"db34c62db20b9ccc8d9866ed6e13a6c6df54f2dfd8746e88c1a07c0e460b9867"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.822573 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" event={"ID":"8ccc8e92-b072-4c98-ba60-8cfbaeef1776","Type":"ContainerStarted","Data":"be9d698246825142c42528130391ee0fc494dc3cec72324e160d7c860d93462b"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.822646 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6zm46" event={"ID":"8ccc8e92-b072-4c98-ba60-8cfbaeef1776","Type":"ContainerStarted","Data":"07dbf7d5878853b1b91d1041cb8a4f5d326d4b7f4f52d7e115935b3e144be61e"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.825987 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" event={"ID":"7f673c7b-0916-4829-9630-1f927c932254","Type":"ContainerStarted","Data":"b939fafe32187235946cb441cc4979e645f2bba16ba046dd48c4fe719806a1d3"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.828560 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ztv28" event={"ID":"14d96431-59d9-4550-a933-e94472bd3295","Type":"ContainerStarted","Data":"630b825610f85a53d66f7c8f220be96c5c93575b54a46e7fa051e983f8bab310"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.831232 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" event={"ID":"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85","Type":"ContainerStarted","Data":"24c20b970669b1cec935c0af9871aa3992f8300792b6dcbb5cd3ca779d14eebb"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.831295 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" event={"ID":"3dc8fed9-2dc2-46e5-8f2c-7c2d26061a85","Type":"ContainerStarted","Data":"525b305f31712e868fd532817b0a5d333ce7a32414e01eb188c9bb728dc5e64f"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.833611 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" event={"ID":"840d6bcf-e97f-4804-9ed8-164475f990eb","Type":"ContainerStarted","Data":"9d7b3942d0efb30556f53a7cf588306f9651bc3a6bd9061324328b6ad48954e1"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.836292 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" event={"ID":"38cf7724-9e22-4b65-9362-4e712828808d","Type":"ContainerStarted","Data":"ae84279dbee00a6658b9ed57fc2e0baf727c84a0a6953ebca71b1609a35acded"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.836338 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" event={"ID":"38cf7724-9e22-4b65-9362-4e712828808d","Type":"ContainerStarted","Data":"a52c7385dc2a89c1dae8867f39728fa24cfe8d52902bdd8bb574c35395d228f6"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.836460 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:13 
crc kubenswrapper[4797]: I0216 11:09:13.837928 4797 generic.go:334] "Generic (PLEG): container finished" podID="3e6d740e-c662-41a2-a815-0143fe9e7785" containerID="6e1aa2d7d48198b52a5f2a0b7d75a9f73ffe3d4fd89d079bcc1b0f9a4f307bed" exitCode=0 Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.838004 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" event={"ID":"3e6d740e-c662-41a2-a815-0143fe9e7785","Type":"ContainerDied","Data":"6e1aa2d7d48198b52a5f2a0b7d75a9f73ffe3d4fd89d079bcc1b0f9a4f307bed"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.842003 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" event={"ID":"0a51263c-39fa-4c6f-9f1c-6b31707a67a8","Type":"ContainerStarted","Data":"8893c503ed15da4e5d9dd2d2285a207f851226b640d5f34ab9f68059d13c621c"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.842038 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" event={"ID":"0a51263c-39fa-4c6f-9f1c-6b31707a67a8","Type":"ContainerStarted","Data":"dae386d02104fbaa24ab90141afa12e7162188613ed4e6cfb8714b16e890076c"} Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.844860 4797 patch_prober.go:28] interesting pod/downloads-7954f5f757-dxttg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.844907 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dxttg" podUID="8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.845165 4797 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5rgnb container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" start-of-body= Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.845212 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" podUID="cb64bae9-2b5d-4ad4-b184-36f36908713a" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.37:8080/healthz\": dial tcp 10.217.0.37:8080: connect: connection refused" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.848875 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.856129 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-h68np" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.883365 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qtxzc" podStartSLOduration=125.883350371 podStartE2EDuration="2m5.883350371s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:13.880831003 +0000 UTC m=+148.601015983" watchObservedRunningTime="2026-02-16 11:09:13.883350371 +0000 UTC m=+148.603535351" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.886713 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jndqp"] Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.887569 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.898064 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:13 crc kubenswrapper[4797]: E0216 11:09:13.904326 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.40430309 +0000 UTC m=+149.124488060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.905961 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:13 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:13 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:13 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.906011 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.924016 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.927269 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" podStartSLOduration=125.927252372 podStartE2EDuration="2m5.927252372s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:13.924521238 +0000 UTC m=+148.644706218" watchObservedRunningTime="2026-02-16 11:09:13.927252372 +0000 UTC m=+148.647437352" Feb 16 11:09:13 crc kubenswrapper[4797]: I0216 11:09:13.947551 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-jndqp"] Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.001381 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.002074 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.502053541 +0000 UTC m=+149.222238521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.002697 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8xchc" podStartSLOduration=8.002679298 podStartE2EDuration="8.002679298s" podCreationTimestamp="2026-02-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:13.963157886 +0000 UTC m=+148.683342856" watchObservedRunningTime="2026-02-16 11:09:14.002679298 +0000 UTC m=+148.722864298" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.048162 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" podStartSLOduration=126.048143311 podStartE2EDuration="2m6.048143311s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.046176197 +0000 UTC m=+148.766361177" watchObservedRunningTime="2026-02-16 11:09:14.048143311 +0000 UTC m=+148.768328291" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.074399 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xrcnq" podStartSLOduration=8.074365973 podStartE2EDuration="8.074365973s" podCreationTimestamp="2026-02-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.073723774 +0000 UTC m=+148.793908754" watchObservedRunningTime="2026-02-16 11:09:14.074365973 +0000 UTC m=+148.794550953" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.134890 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.135114 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8x2x\" (UniqueName: \"kubernetes.io/projected/15fec828-6337-4c27-93ca-4b022a74486e-kube-api-access-r8x2x\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.135160 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-utilities\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.135293 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-catalog-content\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.135426 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.635409048 +0000 UTC m=+149.355594038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.143020 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-shcfr"] Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.146225 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-p2tmz" podStartSLOduration=126.14620439 podStartE2EDuration="2m6.14620439s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.143817256 +0000 UTC m=+148.864002236" watchObservedRunningTime="2026-02-16 11:09:14.14620439 +0000 UTC m=+148.866389370" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.158315 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shcfr"] Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.164662 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.172526 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.223497 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-tg9bq" podStartSLOduration=126.223473107 podStartE2EDuration="2m6.223473107s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.218788269 +0000 UTC m=+148.938973249" watchObservedRunningTime="2026-02-16 11:09:14.223473107 +0000 UTC m=+148.943658087" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.243238 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-catalog-content\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.243307 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8x2x\" (UniqueName: \"kubernetes.io/projected/15fec828-6337-4c27-93ca-4b022a74486e-kube-api-access-r8x2x\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.243330 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-utilities\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.243369 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.243662 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.743647714 +0000 UTC m=+149.463832694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.244172 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-utilities\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.244389 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-catalog-content\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.288018 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kjf4q"] Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.289448 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.301116 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.301154 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.304093 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjf4q"] Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.323689 4797 patch_prober.go:28] interesting pod/apiserver-76f77b778f-mxqz2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.323989 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" podUID="3699cc64-5615-4ce7-890a-d8fbed713b4c" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.328451 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8x2x\" (UniqueName: \"kubernetes.io/projected/15fec828-6337-4c27-93ca-4b022a74486e-kube-api-access-r8x2x\") pod \"certified-operators-jndqp\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") " pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.346424 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.346649 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-catalog-content\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.346689 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-utilities\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.346799 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcqpk\" (UniqueName: \"kubernetes.io/projected/3cf1530b-a55a-41f5-bffc-b2094a0e8746-kube-api-access-kcqpk\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.346907 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.846889524 +0000 UTC m=+149.567074504 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.364610 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-chmt4" podStartSLOduration=126.364551743 podStartE2EDuration="2m6.364551743s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.362458856 +0000 UTC m=+149.082643836" watchObservedRunningTime="2026-02-16 11:09:14.364551743 +0000 UTC m=+149.084736723" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.377984 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.378709 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.393088 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" podStartSLOduration=126.393071226 podStartE2EDuration="2m6.393071226s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.390237279 +0000 UTC m=+149.110422259" watchObservedRunningTime="2026-02-16 11:09:14.393071226 +0000 UTC m=+149.113256206" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.393738 4797 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-gmmm4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.393841 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" podUID="0bf776c0-392b-4a88-86df-a31fc1538e5f" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.431697 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-6pbbr" podStartSLOduration=126.431680474 podStartE2EDuration="2m6.431680474s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.429990757 +0000 UTC m=+149.150175737" watchObservedRunningTime="2026-02-16 11:09:14.431680474 +0000 UTC m=+149.151865444" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.448154 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.448420 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcqpk\" (UniqueName: \"kubernetes.io/projected/3cf1530b-a55a-41f5-bffc-b2094a0e8746-kube-api-access-kcqpk\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.448542 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nj4\" (UniqueName: \"kubernetes.io/projected/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-kube-api-access-f7nj4\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.448652 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-catalog-content\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.448731 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-utilities\") pod 
\"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.448757 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:14.948732606 +0000 UTC m=+149.668917586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.448903 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-catalog-content\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.449861 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-catalog-content\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.449930 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-utilities\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.450055 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-utilities\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.487208 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-xlhh5" podStartSLOduration=126.487189449 podStartE2EDuration="2m6.487189449s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.475907903 +0000 UTC m=+149.196092883" watchObservedRunningTime="2026-02-16 11:09:14.487189449 +0000 UTC m=+149.207374419" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.516132 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5nrhk"] Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.523372 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.528323 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcqpk\" (UniqueName: \"kubernetes.io/projected/3cf1530b-a55a-41f5-bffc-b2094a0e8746-kube-api-access-kcqpk\") pod \"community-operators-shcfr\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") " pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.551270 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.551514 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7nj4\" (UniqueName: \"kubernetes.io/projected/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-kube-api-access-f7nj4\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.551570 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-catalog-content\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.551619 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-utilities\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.552032 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-utilities\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.552092 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.052078549 +0000 UTC m=+149.772263529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.555550 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.556680 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-catalog-content\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.566208 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nrhk"] Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.625608 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7nj4\" (UniqueName: \"kubernetes.io/projected/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-kube-api-access-f7nj4\") pod \"certified-operators-kjf4q\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.639168 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rsvqr" podStartSLOduration=126.639152851 podStartE2EDuration="2m6.639152851s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.637101775 +0000 UTC m=+149.357286755" watchObservedRunningTime="2026-02-16 11:09:14.639152851 +0000 UTC m=+149.359337831" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.654165 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.654238 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-utilities\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.654270 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-catalog-content\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.654289 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ppl7\" (UniqueName: \"kubernetes.io/projected/d8ffede4-813c-406a-9590-79f745ef4283-kube-api-access-7ppl7\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.654618 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.15460525 +0000 UTC m=+149.874790230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.757270 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.757768 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-utilities\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.757817 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-catalog-content\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.757848 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ppl7\" (UniqueName: \"kubernetes.io/projected/d8ffede4-813c-406a-9590-79f745ef4283-kube-api-access-7ppl7\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.758420 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-utilities\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.758634 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.258612131 +0000 UTC m=+149.978797111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.758728 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-catalog-content\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.789405 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" podStartSLOduration=126.789388546 podStartE2EDuration="2m6.789388546s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:14.723858808 +0000 UTC m=+149.444043788" watchObservedRunningTime="2026-02-16 11:09:14.789388546 +0000 UTC m=+149.509573526" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.810021 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.816628 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ppl7\" (UniqueName: \"kubernetes.io/projected/d8ffede4-813c-406a-9590-79f745ef4283-kube-api-access-7ppl7\") pod \"community-operators-5nrhk\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.855948 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.856454 4797 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dnhj4 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.856484 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" podUID="98432c03-3d6e-436b-a2de-5467c1e5f33b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.40:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.859239 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.859302 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.859639 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.359624531 +0000 UTC m=+150.079809511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.870326 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.904242 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.924426 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:14 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:14 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:14 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.924502 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.981279 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.981589 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.981656 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.981686 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.985718 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:09:14 crc kubenswrapper[4797]: E0216 11:09:14.985804 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.485788463 +0000 UTC m=+150.205973443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.989047 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.990535 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.996658 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" event={"ID":"3e6d740e-c662-41a2-a815-0143fe9e7785","Type":"ContainerStarted","Data":"b23a6afcc3dbd197640e0b91226b4c9e80bffa7f56a12e55ab6c6e6b2f618dc1"} Feb 16 11:09:14 crc kubenswrapper[4797]: I0216 11:09:14.997053 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.012892 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.030806 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.032088 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" event={"ID":"261bff34-cd36-4214-880f-231fa0f1679b","Type":"ContainerStarted","Data":"f88201d723caf4198bd166005918140cd527cad4362e4812abed16b246f6234a"} Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.063469 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.064976 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.094193 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.123626 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.62360401 +0000 UTC m=+150.343788990 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.196211 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.197150 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.697134965 +0000 UTC m=+150.417319935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.198143 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.204876 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.704852964 +0000 UTC m=+150.425037954 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.300177 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.300537 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.800522019 +0000 UTC m=+150.520706999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.364337 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" podStartSLOduration=127.364292269 podStartE2EDuration="2m7.364292269s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:15.183036562 +0000 UTC m=+149.903221542" watchObservedRunningTime="2026-02-16 11:09:15.364292269 +0000 UTC m=+150.084477249" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.382142 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jndqp"] Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.402394 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.402762 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:15.902750352 +0000 UTC m=+150.622935332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: W0216 11:09:15.441499 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15fec828_6337_4c27_93ca_4b022a74486e.slice/crio-35ee0cda34d2eeb99d54699bfb2c3dfd245d940687aebede63adb82f5e4544f3 WatchSource:0}: Error finding container 35ee0cda34d2eeb99d54699bfb2c3dfd245d940687aebede63adb82f5e4544f3: Status 404 returned error can't find the container with id 35ee0cda34d2eeb99d54699bfb2c3dfd245d940687aebede63adb82f5e4544f3 Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.503063 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.503426 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.003410213 +0000 UTC m=+150.723595193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.604812 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.605805 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.105789229 +0000 UTC m=+150.825974209 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.723920 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.724362 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.224342415 +0000 UTC m=+150.944527395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.825373 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.826022 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.326009202 +0000 UTC m=+151.046194182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.831073 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5nrhk"] Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.899162 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p44df"] Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.901015 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.909665 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.921174 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:15 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:15 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:15 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.921280 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.927270 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.927548 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.427519726 +0000 UTC m=+151.147704706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:15 crc kubenswrapper[4797]: I0216 11:09:15.927639 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:15 crc kubenswrapper[4797]: E0216 11:09:15.928036 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.428025829 +0000 UTC m=+151.148210809 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.012192 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p44df"] Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.026802 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-shcfr"] Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.029288 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.029494 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-utilities\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.029538 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjr6\" (UniqueName: \"kubernetes.io/projected/3f9dbba0-bbf1-49d8-84c2-158007da8a69-kube-api-access-rzjr6\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.029591 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-catalog-content\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.029710 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.529693327 +0000 UTC m=+151.249878307 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.082200 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nrhk" event={"ID":"d8ffede4-813c-406a-9590-79f745ef4283","Type":"ContainerStarted","Data":"43e4fac9a1efd89a688beafc2adb1b030a13fd5b3af61ffe78a814f4602dbd48"} Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.121043 4797 generic.go:334] "Generic (PLEG): container finished" podID="15fec828-6337-4c27-93ca-4b022a74486e" containerID="4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22" exitCode=0 Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.122385 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndqp" event={"ID":"15fec828-6337-4c27-93ca-4b022a74486e","Type":"ContainerDied","Data":"4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22"} Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.122477 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndqp" event={"ID":"15fec828-6337-4c27-93ca-4b022a74486e","Type":"ContainerStarted","Data":"35ee0cda34d2eeb99d54699bfb2c3dfd245d940687aebede63adb82f5e4544f3"} Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.131326 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.131364 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-utilities\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.131396 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjr6\" (UniqueName: \"kubernetes.io/projected/3f9dbba0-bbf1-49d8-84c2-158007da8a69-kube-api-access-rzjr6\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.131432 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-catalog-content\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.131801 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-catalog-content\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.132048 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.632034873 +0000 UTC m=+151.352219853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.132368 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-utilities\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.140506 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.232330 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.237849 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.737810062 +0000 UTC m=+151.457995042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.262499 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjr6\" (UniqueName: \"kubernetes.io/projected/3f9dbba0-bbf1-49d8-84c2-158007da8a69-kube-api-access-rzjr6\") pod \"redhat-marketplace-p44df\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") " pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.309815 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ttw8s"] Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.331927 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.340245 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.340630 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.84061449 +0000 UTC m=+151.560799470 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.383166 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttw8s"] Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.453664 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.453901 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-catalog-content\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.453934 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-utilities\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.453984 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfzt8\" (UniqueName: \"kubernetes.io/projected/4e1f1272-6892-4bc2-be85-30b6d08df6ec-kube-api-access-gfzt8\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.454303 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:16.954284143 +0000 UTC m=+151.674469123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: W0216 11:09:16.468169 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-84f2a5a4a6b89ea75da8b0e75a11850cac398a4b582f63ed907560ffe82a9c90 WatchSource:0}: Error finding container 84f2a5a4a6b89ea75da8b0e75a11850cac398a4b582f63ed907560ffe82a9c90: Status 404 returned error can't find the container with id 84f2a5a4a6b89ea75da8b0e75a11850cac398a4b582f63ed907560ffe82a9c90 Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.511298 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kjf4q"] Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.550025 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.557952 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.557996 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-catalog-content\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.558015 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-utilities\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.558044 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfzt8\" (UniqueName: \"kubernetes.io/projected/4e1f1272-6892-4bc2-be85-30b6d08df6ec-kube-api-access-gfzt8\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.558483 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.058470858 +0000 UTC m=+151.778655848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.558733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-catalog-content\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.558751 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-utilities\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.601570 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfzt8\" (UniqueName: \"kubernetes.io/projected/4e1f1272-6892-4bc2-be85-30b6d08df6ec-kube-api-access-gfzt8\") pod \"redhat-marketplace-ttw8s\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.660202 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.660662 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.16064313 +0000 UTC m=+151.880828110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: W0216 11:09:16.700745 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3393b2de9b4d37b8c980a36ff8f108de874a9631768966e21e52a5f011827900 WatchSource:0}: Error finding container 3393b2de9b4d37b8c980a36ff8f108de874a9631768966e21e52a5f011827900: Status 404 returned error can't find the container with id 3393b2de9b4d37b8c980a36ff8f108de874a9631768966e21e52a5f011827900 Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.763346 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.764502 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.764863 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.264847746 +0000 UTC m=+151.985032726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.865291 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.866739 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.366708569 +0000 UTC m=+152.086893549 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.866877 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:16 crc kubenswrapper[4797]: E0216 11:09:16.867244 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.367236253 +0000 UTC m=+152.087421233 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:16 crc kubenswrapper[4797]: I0216 11:09:16.920935 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 11:09:16 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld
Feb 16 11:09:16 crc kubenswrapper[4797]: [+]process-running ok
Feb 16 11:09:16 crc kubenswrapper[4797]: healthz check failed
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:16.921001 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:16.976626 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:17 crc kubenswrapper[4797]: E0216 11:09:16.977048 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.477031762 +0000 UTC m=+152.197216742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.078866 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:17 crc kubenswrapper[4797]: E0216 11:09:17.079198 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.579184002 +0000 UTC m=+152.299368982 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.171080 4797 generic.go:334] "Generic (PLEG): container finished" podID="d8ffede4-813c-406a-9590-79f745ef4283" containerID="3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80" exitCode=0 Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.171760 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nrhk" event={"ID":"d8ffede4-813c-406a-9590-79f745ef4283","Type":"ContainerDied","Data":"3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80"} Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.181631 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 11:09:17 crc kubenswrapper[4797]: E0216 11:09:17.183361 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.683345927 +0000 UTC m=+152.403530907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.251475 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" event={"ID":"261bff34-cd36-4214-880f-231fa0f1679b","Type":"ContainerStarted","Data":"55e3c0b21213af1acd035b3e3e500dbb1fabce5c14498fc9a177e3213d2dfaf5"}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.267226 4797 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.273412 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3393b2de9b4d37b8c980a36ff8f108de874a9631768966e21e52a5f011827900"}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.276711 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjf4q" event={"ID":"55d238b0-cdbe-48f8-afe0-4e163ad4b48a","Type":"ContainerStarted","Data":"741ca22a909ec328f5c10470e002fba5cc0cc921b62fdfd42bd8a5067c8e66e5"}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.286339 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kfgcw"]
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.289719 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:17 crc kubenswrapper[4797]: E0216 11:09:17.290065 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 11:09:17.79003994 +0000 UTC m=+152.510224910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ckmh7" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.291686 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfgcw"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.291728 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"84f2a5a4a6b89ea75da8b0e75a11850cac398a4b582f63ed907560ffe82a9c90"}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.303791 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.306426 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfgcw"]
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.316156 4797 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T11:09:17.267267993Z","Handler":null,"Name":""}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.321635 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"85fd49b849abce553147b82f96aea40af56cf58fbbc9ae1acf93a1a0fafed241"}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.346011 4797 generic.go:334] "Generic (PLEG): container finished" podID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerID="a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223" exitCode=0
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.347491 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shcfr" event={"ID":"3cf1530b-a55a-41f5-bffc-b2094a0e8746","Type":"ContainerDied","Data":"a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223"}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.347516 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shcfr" event={"ID":"3cf1530b-a55a-41f5-bffc-b2094a0e8746","Type":"ContainerStarted","Data":"3801b2589dbefa6331383eceac6d82cd540888ccee7b93389fd1ef3c95d8498d"}
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.376678 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p44df"]
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.378440 4797 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.378467 4797 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.391735 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.403508 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.493677 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.494415 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-utilities\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.494460 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn96r\" (UniqueName: \"kubernetes.io/projected/ab1faf22-b706-4d66-909e-e1de1fe89b62-kube-api-access-hn96r\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.494485 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-catalog-content\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw"
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.586109 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.586146 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.597161 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-utilities\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.597227 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn96r\" (UniqueName: \"kubernetes.io/projected/ab1faf22-b706-4d66-909e-e1de1fe89b62-kube-api-access-hn96r\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.597253 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-catalog-content\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.597848 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-catalog-content\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.598339 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-utilities\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.622025 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn96r\" (UniqueName: \"kubernetes.io/projected/ab1faf22-b706-4d66-909e-e1de1fe89b62-kube-api-access-hn96r\") pod \"redhat-operators-kfgcw\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.645313 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttw8s"] Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.681353 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ncvjt"] Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.682713 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.683062 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-x2lhj" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.704533 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncvjt"] Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.720258 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.788695 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ckmh7\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") " pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.799929 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-catalog-content\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.800047 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbqxn\" (UniqueName: \"kubernetes.io/projected/14c7e742-337d-443c-a3e1-057300517c25-kube-api-access-hbqxn\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.800075 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-utilities\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.823239 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.824287 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.840117 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.840881 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.850348 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.902933 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.903161 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbqxn\" (UniqueName: \"kubernetes.io/projected/14c7e742-337d-443c-a3e1-057300517c25-kube-api-access-hbqxn\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.903305 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-utilities\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.903396 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.903484 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-catalog-content\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.903982 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-catalog-content\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.904509 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-utilities\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.910380 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:17 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:17 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:17 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.910432 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.962103 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbqxn\" (UniqueName: \"kubernetes.io/projected/14c7e742-337d-443c-a3e1-057300517c25-kube-api-access-hbqxn\") pod \"redhat-operators-ncvjt\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:17 crc kubenswrapper[4797]: I0216 11:09:17.986055 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.009328 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.010288 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.010414 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.066115 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.081310 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.104471 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.290682 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.354721 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfgcw"] Feb 16 11:09:18 crc kubenswrapper[4797]: W0216 11:09:18.370304 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab1faf22_b706_4d66_909e_e1de1fe89b62.slice/crio-7693cf53aafc2080c9e5be39a3c435c91b5a048081877778a6e84bf934607b06 WatchSource:0}: Error finding container 7693cf53aafc2080c9e5be39a3c435c91b5a048081877778a6e84bf934607b06: Status 404 returned error can't find the container with id 7693cf53aafc2080c9e5be39a3c435c91b5a048081877778a6e84bf934607b06 Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.373343 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"0293617e491bb8062448457e842163a7e78a1e7507ad4e3055059dd2f3e79888"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.383724 4797 generic.go:334] "Generic (PLEG): container finished" podID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerID="96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16" exitCode=0 Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.383823 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjf4q" event={"ID":"55d238b0-cdbe-48f8-afe0-4e163ad4b48a","Type":"ContainerDied","Data":"96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.385440 4797 generic.go:334] "Generic (PLEG): container finished" podID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerID="38f9e492ece8ee387e940e07eea2fbae4395454b8909033943745da0aadbca3a" exitCode=0 Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.385503 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttw8s" event={"ID":"4e1f1272-6892-4bc2-be85-30b6d08df6ec","Type":"ContainerDied","Data":"38f9e492ece8ee387e940e07eea2fbae4395454b8909033943745da0aadbca3a"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.385540 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttw8s" event={"ID":"4e1f1272-6892-4bc2-be85-30b6d08df6ec","Type":"ContainerStarted","Data":"586febb0653ed3136ed8ea9a9fafd1b9b8b7a3513f445ac5290db9a231d877fc"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.400211 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ab04fecf13b31390e4622e501cd6ed5775c3cf8bbc06b00020e34fac1d293e16"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.402670 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.414649 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8468236aaa9e98e3bd543e422da94785ec033199d807fd0ab535c1e9e562900e"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.446162 
4797 generic.go:334] "Generic (PLEG): container finished" podID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerID="148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1" exitCode=0 Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.446349 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p44df" event={"ID":"3f9dbba0-bbf1-49d8-84c2-158007da8a69","Type":"ContainerDied","Data":"148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.446394 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p44df" event={"ID":"3f9dbba0-bbf1-49d8-84c2-158007da8a69","Type":"ContainerStarted","Data":"ff88a00e84fd300ad403c26adddd1abef55481b59488a63ccae05081b765f941"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.469663 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" event={"ID":"261bff34-cd36-4214-880f-231fa0f1679b","Type":"ContainerStarted","Data":"87b472bf7e03ac51f9751a9df33c9a3d335a95531448eba1507c8c7748e8c91b"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.470277 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" event={"ID":"261bff34-cd36-4214-880f-231fa0f1679b","Type":"ContainerStarted","Data":"b61ce2b657717acaa542004079058706a45b56e3151814ec4b84aabd8ccb312a"} Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.566171 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" podStartSLOduration=12.566138502 podStartE2EDuration="12.566138502s" podCreationTimestamp="2026-02-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:18.558167796 +0000 UTC m=+153.278352796" watchObservedRunningTime="2026-02-16 11:09:18.566138502 +0000 UTC m=+153.286323482" Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.678726 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.711403 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ckmh7"] Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.775277 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ncvjt"] Feb 16 11:09:18 crc kubenswrapper[4797]: W0216 11:09:18.783268 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14c7e742_337d_443c_a3e1_057300517c25.slice/crio-4132a3de5a687256725d9ced49a90b9b11e11d5bbe5db8a056423e7a25a5dcc9 WatchSource:0}: Error finding container 4132a3de5a687256725d9ced49a90b9b11e11d5bbe5db8a056423e7a25a5dcc9: Status 404 returned error can't find the container with id 4132a3de5a687256725d9ced49a90b9b11e11d5bbe5db8a056423e7a25a5dcc9 Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.913066 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:18 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:18 crc 
kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:18 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:18 crc kubenswrapper[4797]: I0216 11:09:18.913187 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.146212 4797 patch_prober.go:28] interesting pod/downloads-7954f5f757-dxttg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.146338 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-dxttg" podUID="8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.146683 4797 patch_prober.go:28] interesting pod/downloads-7954f5f757-dxttg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.146767 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-dxttg" podUID="8f0f2562-5ca4-414e-b8a4-d7ab61e9bc96" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.33:8080/\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.175159 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.175211 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.177432 4797 patch_prober.go:28] interesting pod/console-f9d7485db-4d5np container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.177479 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4d5np" podUID="61891ace-57b4-446d-afb5-cec9848da89a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.311657 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.322590 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mxqz2" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.404860 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.423727 4797 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmmm4" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.488089 4797 generic.go:334] "Generic (PLEG): container finished" podID="7f673c7b-0916-4829-9630-1f927c932254" containerID="b939fafe32187235946cb441cc4979e645f2bba16ba046dd48c4fe719806a1d3" exitCode=0 Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.488166 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" event={"ID":"7f673c7b-0916-4829-9630-1f927c932254","Type":"ContainerDied","Data":"b939fafe32187235946cb441cc4979e645f2bba16ba046dd48c4fe719806a1d3"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.501208 4797 generic.go:334] "Generic (PLEG): container finished" podID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerID="64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5" exitCode=0 Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.501284 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfgcw" event={"ID":"ab1faf22-b706-4d66-909e-e1de1fe89b62","Type":"ContainerDied","Data":"64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.501322 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfgcw" event={"ID":"ab1faf22-b706-4d66-909e-e1de1fe89b62","Type":"ContainerStarted","Data":"7693cf53aafc2080c9e5be39a3c435c91b5a048081877778a6e84bf934607b06"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.533383 4797 generic.go:334] "Generic (PLEG): container finished" podID="14c7e742-337d-443c-a3e1-057300517c25" containerID="3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4" exitCode=0 Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.533525 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncvjt" event={"ID":"14c7e742-337d-443c-a3e1-057300517c25","Type":"ContainerDied","Data":"3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.533562 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncvjt" event={"ID":"14c7e742-337d-443c-a3e1-057300517c25","Type":"ContainerStarted","Data":"4132a3de5a687256725d9ced49a90b9b11e11d5bbe5db8a056423e7a25a5dcc9"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.552031 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" event={"ID":"d97ef757-b33f-4c9d-9a9b-758cf73ce40e","Type":"ContainerStarted","Data":"397f5299cdf6f15525015ec7a79f804c1afd854e4e995b11b45e29575ea463dc"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.552087 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" event={"ID":"d97ef757-b33f-4c9d-9a9b-758cf73ce40e","Type":"ContainerStarted","Data":"936f718bf104533277eba9fedb4d585b6baf3d4b75c71a35189dde329390f2fa"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.552204 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.554601 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"cb31a43e-edbb-49c8-b662-b6252ea7c4db","Type":"ContainerStarted","Data":"56e528e5b6fa4290e4afed82a698d265f1b6dc51f1b40279f3632081e2425e83"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.554630 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cb31a43e-edbb-49c8-b662-b6252ea7c4db","Type":"ContainerStarted","Data":"a0e6c252b502392cf4a2d895218d20dc08c76eb863c71fa0cb54a11882506626"} Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.672633 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" podStartSLOduration=131.672616373 podStartE2EDuration="2m11.672616373s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:19.672243363 +0000 UTC m=+154.392428343" watchObservedRunningTime="2026-02-16 11:09:19.672616373 +0000 UTC m=+154.392801363" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.672775 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.672768857 podStartE2EDuration="2.672768857s" podCreationTimestamp="2026-02-16 11:09:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:19.615881905 +0000 UTC m=+154.336066885" watchObservedRunningTime="2026-02-16 11:09:19.672768857 +0000 UTC m=+154.392953857" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.900410 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.908291 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:19 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:19 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:19 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:19 crc kubenswrapper[4797]: I0216 11:09:19.908339 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.027124 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dnhj4" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.291943 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.292703 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.301143 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.302351 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.303088 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.449640 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.449734 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.551287 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.551350 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.551387 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.580736 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.613676 4797 generic.go:334] "Generic (PLEG): container finished" podID="cb31a43e-edbb-49c8-b662-b6252ea7c4db" containerID="56e528e5b6fa4290e4afed82a698d265f1b6dc51f1b40279f3632081e2425e83" exitCode=0 Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.613784 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cb31a43e-edbb-49c8-b662-b6252ea7c4db","Type":"ContainerDied","Data":"56e528e5b6fa4290e4afed82a698d265f1b6dc51f1b40279f3632081e2425e83"} 
Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.622702 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.904043 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:20 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:20 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:20 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.904414 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:20 crc kubenswrapper[4797]: I0216 11:09:20.974773 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.064988 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f673c7b-0916-4829-9630-1f927c932254-config-volume\") pod \"7f673c7b-0916-4829-9630-1f927c932254\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.065088 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ld54\" (UniqueName: \"kubernetes.io/projected/7f673c7b-0916-4829-9630-1f927c932254-kube-api-access-7ld54\") pod \"7f673c7b-0916-4829-9630-1f927c932254\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.065137 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f673c7b-0916-4829-9630-1f927c932254-secret-volume\") pod \"7f673c7b-0916-4829-9630-1f927c932254\" (UID: \"7f673c7b-0916-4829-9630-1f927c932254\") " Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.073642 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f673c7b-0916-4829-9630-1f927c932254-kube-api-access-7ld54" (OuterVolumeSpecName: "kube-api-access-7ld54") pod "7f673c7b-0916-4829-9630-1f927c932254" (UID: "7f673c7b-0916-4829-9630-1f927c932254"). InnerVolumeSpecName "kube-api-access-7ld54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.073965 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f673c7b-0916-4829-9630-1f927c932254-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f673c7b-0916-4829-9630-1f927c932254" (UID: "7f673c7b-0916-4829-9630-1f927c932254"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.083046 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f673c7b-0916-4829-9630-1f927c932254-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f673c7b-0916-4829-9630-1f927c932254" (UID: "7f673c7b-0916-4829-9630-1f927c932254"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.167897 4797 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f673c7b-0916-4829-9630-1f927c932254-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.170191 4797 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f673c7b-0916-4829-9630-1f927c932254-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.170211 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ld54\" (UniqueName: \"kubernetes.io/projected/7f673c7b-0916-4829-9630-1f927c932254-kube-api-access-7ld54\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.184437 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.692943 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"db4fc058-bba2-4ee0-807b-8b05d8a6ea91","Type":"ContainerStarted","Data":"a4fa1d5dcd1f2128a509c75eb96255a8f127ab25b0ff8b4c735f3d25407b68f4"} Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.705783 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" event={"ID":"7f673c7b-0916-4829-9630-1f927c932254","Type":"ContainerDied","Data":"75c385b2d74966ff18b75888040a027b9520f64a598aefa5a2a0adc91561ec76"} Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.705820 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.705822 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75c385b2d74966ff18b75888040a027b9520f64a598aefa5a2a0adc91561ec76" Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.906855 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:21 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:21 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:21 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:21 crc kubenswrapper[4797]: I0216 11:09:21.906902 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.070062 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8xchc" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.124873 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.302323 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kube-api-access\") pod \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.302367 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kubelet-dir\") pod \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\" (UID: \"cb31a43e-edbb-49c8-b662-b6252ea7c4db\") " Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.302540 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cb31a43e-edbb-49c8-b662-b6252ea7c4db" (UID: "cb31a43e-edbb-49c8-b662-b6252ea7c4db"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.302639 4797 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.309838 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cb31a43e-edbb-49c8-b662-b6252ea7c4db" (UID: "cb31a43e-edbb-49c8-b662-b6252ea7c4db"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.404419 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cb31a43e-edbb-49c8-b662-b6252ea7c4db-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.727757 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"cb31a43e-edbb-49c8-b662-b6252ea7c4db","Type":"ContainerDied","Data":"a0e6c252b502392cf4a2d895218d20dc08c76eb863c71fa0cb54a11882506626"} Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.727817 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0e6c252b502392cf4a2d895218d20dc08c76eb863c71fa0cb54a11882506626" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.727876 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.729674 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"db4fc058-bba2-4ee0-807b-8b05d8a6ea91","Type":"ContainerStarted","Data":"47cecf93bb5f9dbc430d16a9157fe21f6a5732ce534121fe31867ece1f9a7109"} Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.750454 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.750434113 podStartE2EDuration="2.750434113s" podCreationTimestamp="2026-02-16 11:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:22.746644639 +0000 UTC m=+157.466829639" watchObservedRunningTime="2026-02-16 11:09:22.750434113 +0000 UTC m=+157.470619093" Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.903063 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:22 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:22 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:22 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:22 crc kubenswrapper[4797]: I0216 11:09:22.903160 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:23 crc kubenswrapper[4797]: I0216 11:09:23.794682 4797 generic.go:334] "Generic (PLEG): container finished" podID="db4fc058-bba2-4ee0-807b-8b05d8a6ea91" containerID="47cecf93bb5f9dbc430d16a9157fe21f6a5732ce534121fe31867ece1f9a7109" exitCode=0 Feb 16 11:09:23 crc kubenswrapper[4797]: I0216 11:09:23.794732 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"db4fc058-bba2-4ee0-807b-8b05d8a6ea91","Type":"ContainerDied","Data":"47cecf93bb5f9dbc430d16a9157fe21f6a5732ce534121fe31867ece1f9a7109"} Feb 16 11:09:23 crc kubenswrapper[4797]: I0216 11:09:23.904044 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:23 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:23 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:23 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:23 crc kubenswrapper[4797]: I0216 11:09:23.904111 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:24 crc kubenswrapper[4797]: I0216 11:09:24.903483 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:24 crc 
kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:24 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:24 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:24 crc kubenswrapper[4797]: I0216 11:09:24.903566 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:25 crc kubenswrapper[4797]: I0216 11:09:25.322148 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:09:25 crc kubenswrapper[4797]: I0216 11:09:25.907630 4797 patch_prober.go:28] interesting pod/router-default-5444994796-hmhhf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 11:09:25 crc kubenswrapper[4797]: [-]has-synced failed: reason withheld Feb 16 11:09:25 crc kubenswrapper[4797]: [+]process-running ok Feb 16 11:09:25 crc kubenswrapper[4797]: healthz check failed Feb 16 11:09:25 crc kubenswrapper[4797]: I0216 11:09:25.907897 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hmhhf" podUID="c687cb5b-f367-4bba-b59a-bbe77beee146" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 11:09:26 crc kubenswrapper[4797]: I0216 11:09:26.902223 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:26 crc kubenswrapper[4797]: I0216 11:09:26.904121 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hmhhf" Feb 16 11:09:29 crc kubenswrapper[4797]: I0216 11:09:29.152784 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-dxttg" Feb 16 11:09:29 crc kubenswrapper[4797]: I0216 11:09:29.363716 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:29 crc kubenswrapper[4797]: I0216 11:09:29.368516 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:09:31 crc kubenswrapper[4797]: I0216 11:09:31.043958 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:09:31 crc kubenswrapper[4797]: I0216 11:09:31.051973 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1f19a4ae-a737-4818-82b5-db20cafd45c7-metrics-certs\") pod \"network-metrics-daemon-cglwk\" (UID: \"1f19a4ae-a737-4818-82b5-db20cafd45c7\") " pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:09:31 crc kubenswrapper[4797]: I0216 11:09:31.242144 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-cglwk" Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.790354 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.872390 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"db4fc058-bba2-4ee0-807b-8b05d8a6ea91","Type":"ContainerDied","Data":"a4fa1d5dcd1f2128a509c75eb96255a8f127ab25b0ff8b4c735f3d25407b68f4"} Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.872442 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4fa1d5dcd1f2128a509c75eb96255a8f127ab25b0ff8b4c735f3d25407b68f4" Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.872445 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.900942 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kubelet-dir\") pod \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.901039 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kube-api-access\") pod \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\" (UID: \"db4fc058-bba2-4ee0-807b-8b05d8a6ea91\") " Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.901082 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "db4fc058-bba2-4ee0-807b-8b05d8a6ea91" (UID: "db4fc058-bba2-4ee0-807b-8b05d8a6ea91"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.901708 4797 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:34 crc kubenswrapper[4797]: I0216 11:09:34.905437 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "db4fc058-bba2-4ee0-807b-8b05d8a6ea91" (UID: "db4fc058-bba2-4ee0-807b-8b05d8a6ea91"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:09:35 crc kubenswrapper[4797]: I0216 11:09:35.004147 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db4fc058-bba2-4ee0-807b-8b05d8a6ea91-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 11:09:37 crc kubenswrapper[4797]: I0216 11:09:37.994393 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" Feb 16 11:09:41 crc kubenswrapper[4797]: I0216 11:09:41.704019 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:09:41 crc kubenswrapper[4797]: I0216 11:09:41.704524 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:09:41 crc kubenswrapper[4797]: I0216 11:09:41.864750 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-cglwk"] Feb 16 11:09:45 crc kubenswrapper[4797]: I0216 11:09:45.935886 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cglwk" event={"ID":"1f19a4ae-a737-4818-82b5-db20cafd45c7","Type":"ContainerStarted","Data":"3cad3ac9a712978cbfeaaf4ceac59ba6c3aee547f9365e9c633dfd1d644a6a84"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.942765 4797 generic.go:334] "Generic (PLEG): container finished" podID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerID="905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b" exitCode=0 Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.943112 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjf4q" event={"ID":"55d238b0-cdbe-48f8-afe0-4e163ad4b48a","Type":"ContainerDied","Data":"905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.950537 4797 generic.go:334] "Generic (PLEG): container finished" podID="15fec828-6337-4c27-93ca-4b022a74486e" containerID="819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9" exitCode=0 Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.950629 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndqp" event={"ID":"15fec828-6337-4c27-93ca-4b022a74486e","Type":"ContainerDied","Data":"819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.952030 4797 generic.go:334] "Generic (PLEG): container finished" podID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerID="04434630e6b06d66377abb41f9936e01e2397d749bfaa1b95ceb06db15b161cb" exitCode=0 Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.952110 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttw8s" event={"ID":"4e1f1272-6892-4bc2-be85-30b6d08df6ec","Type":"ContainerDied","Data":"04434630e6b06d66377abb41f9936e01e2397d749bfaa1b95ceb06db15b161cb"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 
11:09:46.954101 4797 generic.go:334] "Generic (PLEG): container finished" podID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerID="148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865" exitCode=0 Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.954183 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p44df" event={"ID":"3f9dbba0-bbf1-49d8-84c2-158007da8a69","Type":"ContainerDied","Data":"148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.957911 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfgcw" event={"ID":"ab1faf22-b706-4d66-909e-e1de1fe89b62","Type":"ContainerStarted","Data":"7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.962103 4797 generic.go:334] "Generic (PLEG): container finished" podID="d8ffede4-813c-406a-9590-79f745ef4283" containerID="b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140" exitCode=0 Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.962162 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nrhk" event={"ID":"d8ffede4-813c-406a-9590-79f745ef4283","Type":"ContainerDied","Data":"b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.965457 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncvjt" event={"ID":"14c7e742-337d-443c-a3e1-057300517c25","Type":"ContainerStarted","Data":"d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.969228 4797 generic.go:334] "Generic (PLEG): container finished" podID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerID="df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed" exitCode=0 Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.969366 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shcfr" event={"ID":"3cf1530b-a55a-41f5-bffc-b2094a0e8746","Type":"ContainerDied","Data":"df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.974982 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cglwk" event={"ID":"1f19a4ae-a737-4818-82b5-db20cafd45c7","Type":"ContainerStarted","Data":"6c53163df8ff2fa635e2549ac0f7c09c59254d5276ea18c0e13cc7901f7807df"} Feb 16 11:09:46 crc kubenswrapper[4797]: I0216 11:09:46.975025 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-cglwk" event={"ID":"1f19a4ae-a737-4818-82b5-db20cafd45c7","Type":"ContainerStarted","Data":"a5434be248a41595ae6d0d40d2c6bb3fe192422500651cac97db4380a5f24868"} Feb 16 11:09:47 crc kubenswrapper[4797]: I0216 11:09:47.107810 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-cglwk" podStartSLOduration=159.107787546 podStartE2EDuration="2m39.107787546s" podCreationTimestamp="2026-02-16 11:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:09:47.104557798 +0000 UTC m=+181.824742778" watchObservedRunningTime="2026-02-16 11:09:47.107787546 +0000 UTC m=+181.827972526" Feb 
16 11:09:47 crc kubenswrapper[4797]: I0216 11:09:47.732625 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nj877"] Feb 16 11:09:47 crc kubenswrapper[4797]: I0216 11:09:47.984379 4797 generic.go:334] "Generic (PLEG): container finished" podID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerID="7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224" exitCode=0 Feb 16 11:09:47 crc kubenswrapper[4797]: I0216 11:09:47.990795 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfgcw" event={"ID":"ab1faf22-b706-4d66-909e-e1de1fe89b62","Type":"ContainerDied","Data":"7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224"} Feb 16 11:09:47 crc kubenswrapper[4797]: I0216 11:09:47.990922 4797 generic.go:334] "Generic (PLEG): container finished" podID="14c7e742-337d-443c-a3e1-057300517c25" containerID="d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace" exitCode=0 Feb 16 11:09:47 crc kubenswrapper[4797]: I0216 11:09:47.991023 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncvjt" event={"ID":"14c7e742-337d-443c-a3e1-057300517c25","Type":"ContainerDied","Data":"d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace"} Feb 16 11:09:49 crc kubenswrapper[4797]: I0216 11:09:49.701678 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4w9q9" Feb 16 11:09:50 crc kubenswrapper[4797]: I0216 11:09:50.012097 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p44df" event={"ID":"3f9dbba0-bbf1-49d8-84c2-158007da8a69","Type":"ContainerStarted","Data":"38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a"} Feb 16 11:09:50 crc kubenswrapper[4797]: I0216 11:09:50.035287 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p44df" podStartSLOduration=4.328809248 podStartE2EDuration="35.035268228s" podCreationTimestamp="2026-02-16 11:09:15 +0000 UTC" firstStartedPulling="2026-02-16 11:09:18.449367775 +0000 UTC m=+153.169552755" lastFinishedPulling="2026-02-16 11:09:49.155826755 +0000 UTC m=+183.876011735" observedRunningTime="2026-02-16 11:09:50.031422784 +0000 UTC m=+184.751607764" watchObservedRunningTime="2026-02-16 11:09:50.035268228 +0000 UTC m=+184.755453208" Feb 16 11:09:51 crc kubenswrapper[4797]: I0216 11:09:51.043485 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjf4q" event={"ID":"55d238b0-cdbe-48f8-afe0-4e163ad4b48a","Type":"ContainerStarted","Data":"386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89"} Feb 16 11:09:52 crc kubenswrapper[4797]: I0216 11:09:52.072212 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kjf4q" podStartSLOduration=6.156397764 podStartE2EDuration="38.072193946s" podCreationTimestamp="2026-02-16 11:09:14 +0000 UTC" firstStartedPulling="2026-02-16 11:09:18.386327085 +0000 UTC m=+153.106512065" lastFinishedPulling="2026-02-16 11:09:50.302123267 +0000 UTC m=+185.022308247" observedRunningTime="2026-02-16 11:09:52.06752736 +0000 UTC m=+186.787712340" watchObservedRunningTime="2026-02-16 11:09:52.072193946 +0000 UTC m=+186.792378926" Feb 16 11:09:53 crc kubenswrapper[4797]: I0216 11:09:53.055874 4797 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-5nrhk" event={"ID":"d8ffede4-813c-406a-9590-79f745ef4283","Type":"ContainerStarted","Data":"5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7"} Feb 16 11:09:53 crc kubenswrapper[4797]: I0216 11:09:53.080957 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5nrhk" podStartSLOduration=4.521691878 podStartE2EDuration="39.080943047s" podCreationTimestamp="2026-02-16 11:09:14 +0000 UTC" firstStartedPulling="2026-02-16 11:09:17.197127481 +0000 UTC m=+151.917312451" lastFinishedPulling="2026-02-16 11:09:51.75637864 +0000 UTC m=+186.476563620" observedRunningTime="2026-02-16 11:09:53.080540635 +0000 UTC m=+187.800725615" watchObservedRunningTime="2026-02-16 11:09:53.080943047 +0000 UTC m=+187.801128027" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.063408 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfgcw" event={"ID":"ab1faf22-b706-4d66-909e-e1de1fe89b62","Type":"ContainerStarted","Data":"aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6"} Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.065721 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncvjt" event={"ID":"14c7e742-337d-443c-a3e1-057300517c25","Type":"ContainerStarted","Data":"9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69"} Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.067838 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndqp" event={"ID":"15fec828-6337-4c27-93ca-4b022a74486e","Type":"ContainerStarted","Data":"3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef"} Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.069641 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttw8s" event={"ID":"4e1f1272-6892-4bc2-be85-30b6d08df6ec","Type":"ContainerStarted","Data":"58aebc180624bc0a0bf39def8fd97de377fe3b40b52a2ebfeea29d691d28160a"} Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.071632 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shcfr" event={"ID":"3cf1530b-a55a-41f5-bffc-b2094a0e8746","Type":"ContainerStarted","Data":"16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16"} Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.088212 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kfgcw" podStartSLOduration=3.894247327 podStartE2EDuration="37.088192415s" podCreationTimestamp="2026-02-16 11:09:17 +0000 UTC" firstStartedPulling="2026-02-16 11:09:19.52869525 +0000 UTC m=+154.248880230" lastFinishedPulling="2026-02-16 11:09:52.722640328 +0000 UTC m=+187.442825318" observedRunningTime="2026-02-16 11:09:54.083511418 +0000 UTC m=+188.803696398" watchObservedRunningTime="2026-02-16 11:09:54.088192415 +0000 UTC m=+188.808377385" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.108972 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ncvjt" podStartSLOduration=3.5183287720000003 podStartE2EDuration="37.108948479s" podCreationTimestamp="2026-02-16 11:09:17 +0000 UTC" firstStartedPulling="2026-02-16 11:09:19.537445827 +0000 UTC m=+154.257630807" lastFinishedPulling="2026-02-16 11:09:53.128065534 +0000 
UTC m=+187.848250514" observedRunningTime="2026-02-16 11:09:54.107127659 +0000 UTC m=+188.827312639" watchObservedRunningTime="2026-02-16 11:09:54.108948479 +0000 UTC m=+188.829133459" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.140021 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jndqp" podStartSLOduration=4.118226571 podStartE2EDuration="41.140001011s" podCreationTimestamp="2026-02-16 11:09:13 +0000 UTC" firstStartedPulling="2026-02-16 11:09:16.140221305 +0000 UTC m=+150.860406285" lastFinishedPulling="2026-02-16 11:09:53.161995745 +0000 UTC m=+187.882180725" observedRunningTime="2026-02-16 11:09:54.133275208 +0000 UTC m=+188.853460188" watchObservedRunningTime="2026-02-16 11:09:54.140001011 +0000 UTC m=+188.860185991" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.167220 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-shcfr" podStartSLOduration=4.238582718 podStartE2EDuration="40.167201808s" podCreationTimestamp="2026-02-16 11:09:14 +0000 UTC" firstStartedPulling="2026-02-16 11:09:17.378713196 +0000 UTC m=+152.098898176" lastFinishedPulling="2026-02-16 11:09:53.307332286 +0000 UTC m=+188.027517266" observedRunningTime="2026-02-16 11:09:54.167042944 +0000 UTC m=+188.887227914" watchObservedRunningTime="2026-02-16 11:09:54.167201808 +0000 UTC m=+188.887386788" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.556555 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.556690 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.810456 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.810497 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.857251 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.857294 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.908377 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.909037 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.910127 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.932710 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ttw8s" podStartSLOduration=4.078467692 podStartE2EDuration="38.932687161s" podCreationTimestamp="2026-02-16 11:09:16 +0000 UTC" firstStartedPulling="2026-02-16 11:09:18.388545646 +0000 UTC m=+153.108730626" 
lastFinishedPulling="2026-02-16 11:09:53.242765115 +0000 UTC m=+187.962950095" observedRunningTime="2026-02-16 11:09:54.191633201 +0000 UTC m=+188.911818191" watchObservedRunningTime="2026-02-16 11:09:54.932687161 +0000 UTC m=+189.652872141" Feb 16 11:09:54 crc kubenswrapper[4797]: I0216 11:09:54.946935 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:55 crc kubenswrapper[4797]: I0216 11:09:55.017411 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 11:09:55 crc kubenswrapper[4797]: I0216 11:09:55.125014 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:09:55 crc kubenswrapper[4797]: I0216 11:09:55.715794 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-jndqp" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="registry-server" probeResult="failure" output=< Feb 16 11:09:55 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s Feb 16 11:09:55 crc kubenswrapper[4797]: > Feb 16 11:09:55 crc kubenswrapper[4797]: I0216 11:09:55.853865 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-shcfr" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="registry-server" probeResult="failure" output=< Feb 16 11:09:55 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s Feb 16 11:09:55 crc kubenswrapper[4797]: > Feb 16 11:09:56 crc kubenswrapper[4797]: I0216 11:09:56.550942 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:56 crc kubenswrapper[4797]: I0216 11:09:56.551316 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:56 crc kubenswrapper[4797]: I0216 11:09:56.588225 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:56 crc kubenswrapper[4797]: I0216 11:09:56.764935 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:56 crc kubenswrapper[4797]: I0216 11:09:56.764986 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:56 crc kubenswrapper[4797]: I0216 11:09:56.801325 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:09:57 crc kubenswrapper[4797]: I0216 11:09:57.147169 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:09:57 crc kubenswrapper[4797]: I0216 11:09:57.722314 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:57 crc kubenswrapper[4797]: I0216 11:09:57.722362 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:09:58 crc kubenswrapper[4797]: I0216 11:09:58.066754 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 
16 11:09:58 crc kubenswrapper[4797]: I0216 11:09:58.066809 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:09:58 crc kubenswrapper[4797]: I0216 11:09:58.772744 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kfgcw" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="registry-server" probeResult="failure" output=< Feb 16 11:09:58 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s Feb 16 11:09:58 crc kubenswrapper[4797]: > Feb 16 11:09:59 crc kubenswrapper[4797]: I0216 11:09:59.104884 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ncvjt" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="registry-server" probeResult="failure" output=< Feb 16 11:09:59 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s Feb 16 11:09:59 crc kubenswrapper[4797]: > Feb 16 11:09:59 crc kubenswrapper[4797]: I0216 11:09:59.351283 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjf4q"] Feb 16 11:09:59 crc kubenswrapper[4797]: I0216 11:09:59.351540 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kjf4q" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="registry-server" containerID="cri-o://386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89" gracePeriod=2 Feb 16 11:10:00 crc kubenswrapper[4797]: I0216 11:10:00.945307 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.056656 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-catalog-content\") pod \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.056744 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7nj4\" (UniqueName: \"kubernetes.io/projected/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-kube-api-access-f7nj4\") pod \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.056792 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-utilities\") pod \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\" (UID: \"55d238b0-cdbe-48f8-afe0-4e163ad4b48a\") " Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.058103 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-utilities" (OuterVolumeSpecName: "utilities") pod "55d238b0-cdbe-48f8-afe0-4e163ad4b48a" (UID: "55d238b0-cdbe-48f8-afe0-4e163ad4b48a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.077521 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-kube-api-access-f7nj4" (OuterVolumeSpecName: "kube-api-access-f7nj4") pod "55d238b0-cdbe-48f8-afe0-4e163ad4b48a" (UID: "55d238b0-cdbe-48f8-afe0-4e163ad4b48a"). InnerVolumeSpecName "kube-api-access-f7nj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.110970 4797 generic.go:334] "Generic (PLEG): container finished" podID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerID="386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89" exitCode=0 Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.111021 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjf4q" event={"ID":"55d238b0-cdbe-48f8-afe0-4e163ad4b48a","Type":"ContainerDied","Data":"386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89"} Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.111052 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kjf4q" event={"ID":"55d238b0-cdbe-48f8-afe0-4e163ad4b48a","Type":"ContainerDied","Data":"741ca22a909ec328f5c10470e002fba5cc0cc921b62fdfd42bd8a5067c8e66e5"} Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.111072 4797 scope.go:117] "RemoveContainer" containerID="386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.111069 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kjf4q" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.124861 4797 scope.go:117] "RemoveContainer" containerID="905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.125093 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55d238b0-cdbe-48f8-afe0-4e163ad4b48a" (UID: "55d238b0-cdbe-48f8-afe0-4e163ad4b48a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.143610 4797 scope.go:117] "RemoveContainer" containerID="96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.157328 4797 scope.go:117] "RemoveContainer" containerID="386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.157767 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.157796 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7nj4\" (UniqueName: \"kubernetes.io/projected/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-kube-api-access-f7nj4\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.157808 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55d238b0-cdbe-48f8-afe0-4e163ad4b48a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.159094 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89\": container with ID starting with 386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89 not found: ID does not exist" containerID="386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.159131 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89"} err="failed to get container status \"386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89\": rpc error: code = NotFound desc = could not find container \"386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89\": container with ID starting with 386ff8622825d9fd556b2a4ae96c52990e39f8ff6240e0d2b29a66e0ea8a9f89 not found: ID does not exist" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.159179 4797 scope.go:117] "RemoveContainer" containerID="905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.159738 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b\": container with ID starting with 905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b not found: ID does not exist" containerID="905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.159787 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b"} err="failed to get container status \"905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b\": rpc error: code = NotFound desc = could not find container \"905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b\": container with ID starting with 905771fc68495d051f0f7a7fefa3b3974bff5e94376bdb86800c848049b61b6b not found: ID does not exist" Feb 16 11:10:01 crc 
kubenswrapper[4797]: I0216 11:10:01.159804 4797 scope.go:117] "RemoveContainer" containerID="96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.160319 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16\": container with ID starting with 96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16 not found: ID does not exist" containerID="96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.160366 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16"} err="failed to get container status \"96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16\": rpc error: code = NotFound desc = could not find container \"96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16\": container with ID starting with 96b318e13e08f0799ae7eb16ca3bca7c0956990ded8574f09cd7420fa3bc1d16 not found: ID does not exist" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.281551 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.281903 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="extract-utilities" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.281924 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="extract-utilities" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.281951 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb31a43e-edbb-49c8-b662-b6252ea7c4db" containerName="pruner" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.281962 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb31a43e-edbb-49c8-b662-b6252ea7c4db" containerName="pruner" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.281980 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db4fc058-bba2-4ee0-807b-8b05d8a6ea91" containerName="pruner" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.281991 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="db4fc058-bba2-4ee0-807b-8b05d8a6ea91" containerName="pruner" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.282013 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="registry-server" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282025 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="registry-server" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.282042 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="extract-content" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282053 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="extract-content" Feb 16 11:10:01 crc kubenswrapper[4797]: E0216 11:10:01.282070 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f673c7b-0916-4829-9630-1f927c932254" containerName="collect-profiles" Feb 16 
11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282083 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f673c7b-0916-4829-9630-1f927c932254" containerName="collect-profiles" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282270 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" containerName="registry-server" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282287 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb31a43e-edbb-49c8-b662-b6252ea7c4db" containerName="pruner" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282301 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f673c7b-0916-4829-9630-1f927c932254" containerName="collect-profiles" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282332 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="db4fc058-bba2-4ee0-807b-8b05d8a6ea91" containerName="pruner" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.282939 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.289290 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.289621 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.294060 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.360118 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.360178 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.438343 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kjf4q"] Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.442151 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kjf4q"] Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.461094 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.461179 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.461246 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.480856 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.611653 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:01 crc kubenswrapper[4797]: I0216 11:10:01.988774 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55d238b0-cdbe-48f8-afe0-4e163ad4b48a" path="/var/lib/kubelet/pods/55d238b0-cdbe-48f8-afe0-4e163ad4b48a/volumes" Feb 16 11:10:02 crc kubenswrapper[4797]: I0216 11:10:02.073083 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 11:10:02 crc kubenswrapper[4797]: I0216 11:10:02.117348 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d11a28f3-fbb7-41dd-887a-66b7ddd99085","Type":"ContainerStarted","Data":"1ed0393ff334b43d80df25195a62d3d9e08b1c2cd25305e6f704d701762c818a"} Feb 16 11:10:03 crc kubenswrapper[4797]: I0216 11:10:03.124068 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d11a28f3-fbb7-41dd-887a-66b7ddd99085","Type":"ContainerStarted","Data":"d31e40accf961afcbcee30a77507c74a1d82440b444174033bdd6164d4cb0d74"} Feb 16 11:10:03 crc kubenswrapper[4797]: I0216 11:10:03.142373 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.142345796 podStartE2EDuration="2.142345796s" podCreationTimestamp="2026-02-16 11:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:10:03.139230367 +0000 UTC m=+197.859415377" watchObservedRunningTime="2026-02-16 11:10:03.142345796 +0000 UTC m=+197.862530816" Feb 16 11:10:04 crc kubenswrapper[4797]: I0216 11:10:04.134976 4797 generic.go:334] "Generic (PLEG): container finished" podID="d11a28f3-fbb7-41dd-887a-66b7ddd99085" containerID="d31e40accf961afcbcee30a77507c74a1d82440b444174033bdd6164d4cb0d74" exitCode=0 Feb 16 11:10:04 crc kubenswrapper[4797]: I0216 11:10:04.135070 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d11a28f3-fbb7-41dd-887a-66b7ddd99085","Type":"ContainerDied","Data":"d31e40accf961afcbcee30a77507c74a1d82440b444174033bdd6164d4cb0d74"} Feb 16 11:10:04 crc kubenswrapper[4797]: I0216 11:10:04.613322 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:10:04 crc kubenswrapper[4797]: I0216 11:10:04.671051 4797 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:10:04 crc kubenswrapper[4797]: I0216 11:10:04.857906 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:10:04 crc kubenswrapper[4797]: I0216 11:10:04.902502 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:10:04 crc kubenswrapper[4797]: I0216 11:10:04.907036 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:10:05 crc kubenswrapper[4797]: I0216 11:10:05.451818 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:05 crc kubenswrapper[4797]: I0216 11:10:05.530964 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kubelet-dir\") pod \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\" (UID: \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " Feb 16 11:10:05 crc kubenswrapper[4797]: I0216 11:10:05.531059 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kube-api-access\") pod \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\" (UID: \"d11a28f3-fbb7-41dd-887a-66b7ddd99085\") " Feb 16 11:10:05 crc kubenswrapper[4797]: I0216 11:10:05.531225 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d11a28f3-fbb7-41dd-887a-66b7ddd99085" (UID: "d11a28f3-fbb7-41dd-887a-66b7ddd99085"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:10:05 crc kubenswrapper[4797]: I0216 11:10:05.531491 4797 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:05 crc kubenswrapper[4797]: I0216 11:10:05.537432 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d11a28f3-fbb7-41dd-887a-66b7ddd99085" (UID: "d11a28f3-fbb7-41dd-887a-66b7ddd99085"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:10:05 crc kubenswrapper[4797]: I0216 11:10:05.632702 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d11a28f3-fbb7-41dd-887a-66b7ddd99085-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:06 crc kubenswrapper[4797]: I0216 11:10:06.149665 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"d11a28f3-fbb7-41dd-887a-66b7ddd99085","Type":"ContainerDied","Data":"1ed0393ff334b43d80df25195a62d3d9e08b1c2cd25305e6f704d701762c818a"} Feb 16 11:10:06 crc kubenswrapper[4797]: I0216 11:10:06.149821 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ed0393ff334b43d80df25195a62d3d9e08b1c2cd25305e6f704d701762c818a" Feb 16 11:10:06 crc kubenswrapper[4797]: I0216 11:10:06.149765 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 11:10:06 crc kubenswrapper[4797]: I0216 11:10:06.829205 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:10:07 crc kubenswrapper[4797]: I0216 11:10:07.763027 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:10:07 crc kubenswrapper[4797]: I0216 11:10:07.818189 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.090317 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 11:10:08 crc kubenswrapper[4797]: E0216 11:10:08.090555 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11a28f3-fbb7-41dd-887a-66b7ddd99085" containerName="pruner" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.090567 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11a28f3-fbb7-41dd-887a-66b7ddd99085" containerName="pruner" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.090690 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11a28f3-fbb7-41dd-887a-66b7ddd99085" containerName="pruner" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.091059 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.094307 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.096518 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.105266 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.112752 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.163587 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.163621 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-var-lock\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.163810 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kube-api-access\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.164072 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.264672 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-var-lock\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.264744 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-var-lock\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.264766 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kube-api-access\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.264849 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.265057 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.289804 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kube-api-access\") pod \"installer-9-crc\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.423216 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.692161 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nrhk"] Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.692992 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5nrhk" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="registry-server" containerID="cri-o://5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7" gracePeriod=2 Feb 16 11:10:08 crc kubenswrapper[4797]: I0216 11:10:08.843959 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 11:10:08 crc kubenswrapper[4797]: W0216 11:10:08.854959 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3c82693e_fd79_4a2c_97d5_ef5facb4fe8d.slice/crio-025b10746c9e4654f5fbc4ede060a8ffbcd1682fbb1148727453be183585d1d8 WatchSource:0}: Error finding container 025b10746c9e4654f5fbc4ede060a8ffbcd1682fbb1148727453be183585d1d8: Status 404 returned error can't find the container with id 025b10746c9e4654f5fbc4ede060a8ffbcd1682fbb1148727453be183585d1d8 Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.032418 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.165138 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d","Type":"ContainerStarted","Data":"57987d960ee8539446f25623f0275218162871303bbbca93262ac8f1f77fdb04"} Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.165182 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d","Type":"ContainerStarted","Data":"025b10746c9e4654f5fbc4ede060a8ffbcd1682fbb1148727453be183585d1d8"} Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.167128 4797 generic.go:334] "Generic (PLEG): container finished" podID="d8ffede4-813c-406a-9590-79f745ef4283" containerID="5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7" exitCode=0 Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.167164 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nrhk" event={"ID":"d8ffede4-813c-406a-9590-79f745ef4283","Type":"ContainerDied","Data":"5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7"} Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.167183 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5nrhk" event={"ID":"d8ffede4-813c-406a-9590-79f745ef4283","Type":"ContainerDied","Data":"43e4fac9a1efd89a688beafc2adb1b030a13fd5b3af61ffe78a814f4602dbd48"} Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.167202 4797 scope.go:117] "RemoveContainer" containerID="5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.167229 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5nrhk" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.178244 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ppl7\" (UniqueName: \"kubernetes.io/projected/d8ffede4-813c-406a-9590-79f745ef4283-kube-api-access-7ppl7\") pod \"d8ffede4-813c-406a-9590-79f745ef4283\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.178312 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-utilities\") pod \"d8ffede4-813c-406a-9590-79f745ef4283\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.178401 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-catalog-content\") pod \"d8ffede4-813c-406a-9590-79f745ef4283\" (UID: \"d8ffede4-813c-406a-9590-79f745ef4283\") " Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.186096 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-utilities" (OuterVolumeSpecName: "utilities") pod "d8ffede4-813c-406a-9590-79f745ef4283" (UID: "d8ffede4-813c-406a-9590-79f745ef4283"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.189803 4797 scope.go:117] "RemoveContainer" containerID="b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.191038 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.191021172 podStartE2EDuration="1.191021172s" podCreationTimestamp="2026-02-16 11:10:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:10:09.186620421 +0000 UTC m=+203.906805391" watchObservedRunningTime="2026-02-16 11:10:09.191021172 +0000 UTC m=+203.911206142" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.201356 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8ffede4-813c-406a-9590-79f745ef4283-kube-api-access-7ppl7" (OuterVolumeSpecName: "kube-api-access-7ppl7") pod "d8ffede4-813c-406a-9590-79f745ef4283" (UID: "d8ffede4-813c-406a-9590-79f745ef4283"). InnerVolumeSpecName "kube-api-access-7ppl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.224686 4797 scope.go:117] "RemoveContainer" containerID="3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.242258 4797 scope.go:117] "RemoveContainer" containerID="5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7" Feb 16 11:10:09 crc kubenswrapper[4797]: E0216 11:10:09.242786 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7\": container with ID starting with 5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7 not found: ID does not exist" containerID="5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.242827 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7"} err="failed to get container status \"5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7\": rpc error: code = NotFound desc = could not find container \"5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7\": container with ID starting with 5e58c14d8651bb36543f81e0db1b8cb2b8968c7b31900dba03938199154b18b7 not found: ID does not exist" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.242856 4797 scope.go:117] "RemoveContainer" containerID="b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140" Feb 16 11:10:09 crc kubenswrapper[4797]: E0216 11:10:09.243171 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140\": container with ID starting with b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140 not found: ID does not exist" containerID="b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.243255 4797 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140"} err="failed to get container status \"b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140\": rpc error: code = NotFound desc = could not find container \"b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140\": container with ID starting with b4a7c17002aeb6e326cb086fd75b6a47cb73a5aa5ec23c036f47e4ab08709140 not found: ID does not exist" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.243319 4797 scope.go:117] "RemoveContainer" containerID="3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80" Feb 16 11:10:09 crc kubenswrapper[4797]: E0216 11:10:09.243770 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80\": container with ID starting with 3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80 not found: ID does not exist" containerID="3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.243869 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80"} err="failed to get container status \"3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80\": rpc error: code = NotFound desc = could not find container \"3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80\": container with ID starting with 3e22e78e5b64aee72d4cb93c29c2c602fa20da7c67533486ded8902f594cbf80 not found: ID does not exist" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.252454 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8ffede4-813c-406a-9590-79f745ef4283" (UID: "d8ffede4-813c-406a-9590-79f745ef4283"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.280532 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.280857 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ppl7\" (UniqueName: \"kubernetes.io/projected/d8ffede4-813c-406a-9590-79f745ef4283-kube-api-access-7ppl7\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.280997 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8ffede4-813c-406a-9590-79f745ef4283-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.499712 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5nrhk"] Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.503440 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5nrhk"] Feb 16 11:10:09 crc kubenswrapper[4797]: I0216 11:10:09.990634 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8ffede4-813c-406a-9590-79f745ef4283" path="/var/lib/kubelet/pods/d8ffede4-813c-406a-9590-79f745ef4283/volumes" Feb 16 11:10:10 crc kubenswrapper[4797]: I0216 11:10:10.892184 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttw8s"] Feb 16 11:10:10 crc kubenswrapper[4797]: I0216 11:10:10.892496 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ttw8s" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerName="registry-server" containerID="cri-o://58aebc180624bc0a0bf39def8fd97de377fe3b40b52a2ebfeea29d691d28160a" gracePeriod=2 Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.094758 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncvjt"] Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.095701 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ncvjt" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="registry-server" containerID="cri-o://9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69" gracePeriod=2 Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.182768 4797 generic.go:334] "Generic (PLEG): container finished" podID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerID="58aebc180624bc0a0bf39def8fd97de377fe3b40b52a2ebfeea29d691d28160a" exitCode=0 Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.182819 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttw8s" event={"ID":"4e1f1272-6892-4bc2-be85-30b6d08df6ec","Type":"ContainerDied","Data":"58aebc180624bc0a0bf39def8fd97de377fe3b40b52a2ebfeea29d691d28160a"} Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.248628 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.406730 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-utilities\") pod \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.406872 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfzt8\" (UniqueName: \"kubernetes.io/projected/4e1f1272-6892-4bc2-be85-30b6d08df6ec-kube-api-access-gfzt8\") pod \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.407437 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-utilities" (OuterVolumeSpecName: "utilities") pod "4e1f1272-6892-4bc2-be85-30b6d08df6ec" (UID: "4e1f1272-6892-4bc2-be85-30b6d08df6ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.407779 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-catalog-content\") pod \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\" (UID: \"4e1f1272-6892-4bc2-be85-30b6d08df6ec\") " Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.408940 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.412939 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1f1272-6892-4bc2-be85-30b6d08df6ec-kube-api-access-gfzt8" (OuterVolumeSpecName: "kube-api-access-gfzt8") pod "4e1f1272-6892-4bc2-be85-30b6d08df6ec" (UID: "4e1f1272-6892-4bc2-be85-30b6d08df6ec"). InnerVolumeSpecName "kube-api-access-gfzt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.443562 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e1f1272-6892-4bc2-be85-30b6d08df6ec" (UID: "4e1f1272-6892-4bc2-be85-30b6d08df6ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.472919 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.512485 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfzt8\" (UniqueName: \"kubernetes.io/projected/4e1f1272-6892-4bc2-be85-30b6d08df6ec-kube-api-access-gfzt8\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.512538 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e1f1272-6892-4bc2-be85-30b6d08df6ec-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.613919 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbqxn\" (UniqueName: \"kubernetes.io/projected/14c7e742-337d-443c-a3e1-057300517c25-kube-api-access-hbqxn\") pod \"14c7e742-337d-443c-a3e1-057300517c25\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.614033 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-utilities\") pod \"14c7e742-337d-443c-a3e1-057300517c25\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.614116 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-catalog-content\") pod \"14c7e742-337d-443c-a3e1-057300517c25\" (UID: \"14c7e742-337d-443c-a3e1-057300517c25\") " Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.614848 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-utilities" (OuterVolumeSpecName: "utilities") pod "14c7e742-337d-443c-a3e1-057300517c25" (UID: "14c7e742-337d-443c-a3e1-057300517c25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.616711 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14c7e742-337d-443c-a3e1-057300517c25-kube-api-access-hbqxn" (OuterVolumeSpecName: "kube-api-access-hbqxn") pod "14c7e742-337d-443c-a3e1-057300517c25" (UID: "14c7e742-337d-443c-a3e1-057300517c25"). InnerVolumeSpecName "kube-api-access-hbqxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.708123 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.708186 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.708231 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.708890 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.708957 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5" gracePeriod=600 Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.719101 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.719143 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbqxn\" (UniqueName: \"kubernetes.io/projected/14c7e742-337d-443c-a3e1-057300517c25-kube-api-access-hbqxn\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.739465 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14c7e742-337d-443c-a3e1-057300517c25" (UID: "14c7e742-337d-443c-a3e1-057300517c25"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:10:11 crc kubenswrapper[4797]: I0216 11:10:11.820392 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14c7e742-337d-443c-a3e1-057300517c25-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.189793 4797 generic.go:334] "Generic (PLEG): container finished" podID="14c7e742-337d-443c-a3e1-057300517c25" containerID="9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69" exitCode=0 Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.190024 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncvjt" event={"ID":"14c7e742-337d-443c-a3e1-057300517c25","Type":"ContainerDied","Data":"9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69"} Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.190161 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ncvjt" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.190186 4797 scope.go:117] "RemoveContainer" containerID="9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.190170 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ncvjt" event={"ID":"14c7e742-337d-443c-a3e1-057300517c25","Type":"ContainerDied","Data":"4132a3de5a687256725d9ced49a90b9b11e11d5bbe5db8a056423e7a25a5dcc9"} Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.193771 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5" exitCode=0 Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.193841 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5"} Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.194024 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"af84a89245e5aaf7fe1b2839496582f7da8d713bc2c59a78f68c5a3db5e3f13c"} Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.198756 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ttw8s" event={"ID":"4e1f1272-6892-4bc2-be85-30b6d08df6ec","Type":"ContainerDied","Data":"586febb0653ed3136ed8ea9a9fafd1b9b8b7a3513f445ac5290db9a231d877fc"} Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.198869 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ttw8s" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.208445 4797 scope.go:117] "RemoveContainer" containerID="d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.245906 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ncvjt"] Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.246861 4797 scope.go:117] "RemoveContainer" containerID="3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.248473 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ncvjt"] Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.264932 4797 scope.go:117] "RemoveContainer" containerID="9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.265038 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttw8s"] Feb 16 11:10:12 crc kubenswrapper[4797]: E0216 11:10:12.265331 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69\": container with ID starting with 9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69 not found: ID does not exist" containerID="9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.265354 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69"} err="failed to get container status \"9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69\": rpc error: code = NotFound desc = could not find container \"9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69\": container with ID starting with 9935ca864be4ac2a433a1bda6bb0ac42758f2188a5fcd01d3c68d0d987986e69 not found: ID does not exist" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.265374 4797 scope.go:117] "RemoveContainer" containerID="d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace" Feb 16 11:10:12 crc kubenswrapper[4797]: E0216 11:10:12.265600 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace\": container with ID starting with d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace not found: ID does not exist" containerID="d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.265615 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace"} err="failed to get container status \"d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace\": rpc error: code = NotFound desc = could not find container \"d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace\": container with ID starting with d742513b018e9e2cf9a98c3e0ded2b8b07def0f3cc82d9abfea60b21f67e0ace not found: ID does not exist" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.265627 4797 scope.go:117] "RemoveContainer" 
containerID="3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4" Feb 16 11:10:12 crc kubenswrapper[4797]: E0216 11:10:12.265826 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4\": container with ID starting with 3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4 not found: ID does not exist" containerID="3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.265841 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4"} err="failed to get container status \"3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4\": rpc error: code = NotFound desc = could not find container \"3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4\": container with ID starting with 3025ca615e542f24122389913825a81809db8d38c5950f3b93e89f50032bb6f4 not found: ID does not exist" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.265852 4797 scope.go:117] "RemoveContainer" containerID="58aebc180624bc0a0bf39def8fd97de377fe3b40b52a2ebfeea29d691d28160a" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.266648 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ttw8s"] Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.284638 4797 scope.go:117] "RemoveContainer" containerID="04434630e6b06d66377abb41f9936e01e2397d749bfaa1b95ceb06db15b161cb" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.298210 4797 scope.go:117] "RemoveContainer" containerID="38f9e492ece8ee387e940e07eea2fbae4395454b8909033943745da0aadbca3a" Feb 16 11:10:12 crc kubenswrapper[4797]: I0216 11:10:12.775354 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" podUID="ad05eae6-52a0-4044-a080-06cb3ebc5a04" containerName="oauth-openshift" containerID="cri-o://b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2" gracePeriod=15 Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.160974 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.210038 4797 generic.go:334] "Generic (PLEG): container finished" podID="ad05eae6-52a0-4044-a080-06cb3ebc5a04" containerID="b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2" exitCode=0 Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.210650 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" event={"ID":"ad05eae6-52a0-4044-a080-06cb3ebc5a04","Type":"ContainerDied","Data":"b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2"} Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.210691 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" event={"ID":"ad05eae6-52a0-4044-a080-06cb3ebc5a04","Type":"ContainerDied","Data":"99cfed1bb60762ef56aca77fd98c91256f2e6e9dffcd5e14fee280ab872edb93"} Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.210717 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-nj877" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.210780 4797 scope.go:117] "RemoveContainer" containerID="b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.228638 4797 scope.go:117] "RemoveContainer" containerID="b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2" Feb 16 11:10:13 crc kubenswrapper[4797]: E0216 11:10:13.229198 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2\": container with ID starting with b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2 not found: ID does not exist" containerID="b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.229248 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2"} err="failed to get container status \"b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2\": rpc error: code = NotFound desc = could not find container \"b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2\": container with ID starting with b03b96399e563f84cb7a4b86f487c13d5c7654b2940bdc783ef16e5fd29a25a2 not found: ID does not exist" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238005 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-idp-0-file-data\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238081 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-provider-selection\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238102 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-error\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238119 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-session\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238142 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-cliconfig\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238168 4797 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-dir\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238198 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-router-certs\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238229 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-service-ca\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238261 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-login\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238284 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-ocp-branding-template\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238311 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-trusted-ca-bundle\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238330 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-policies\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238347 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-serving-cert\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.238366 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpzg8\" (UniqueName: \"kubernetes.io/projected/ad05eae6-52a0-4044-a080-06cb3ebc5a04-kube-api-access-mpzg8\") pod \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\" (UID: \"ad05eae6-52a0-4044-a080-06cb3ebc5a04\") " Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.239281 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-dir" (OuterVolumeSpecName: 
"audit-dir") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.240067 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.240477 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.240835 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.242040 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.244667 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.245016 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.245248 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad05eae6-52a0-4044-a080-06cb3ebc5a04-kube-api-access-mpzg8" (OuterVolumeSpecName: "kube-api-access-mpzg8") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "kube-api-access-mpzg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.245510 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.249927 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.250365 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.250897 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.252974 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.253116 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ad05eae6-52a0-4044-a080-06cb3ebc5a04" (UID: "ad05eae6-52a0-4044-a080-06cb3ebc5a04"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340078 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340114 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340129 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340143 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340156 4797 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340166 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340178 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpzg8\" (UniqueName: \"kubernetes.io/projected/ad05eae6-52a0-4044-a080-06cb3ebc5a04-kube-api-access-mpzg8\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340189 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340198 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340300 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340430 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340442 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340452 4797 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ad05eae6-52a0-4044-a080-06cb3ebc5a04-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.340462 4797 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ad05eae6-52a0-4044-a080-06cb3ebc5a04-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.537627 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nj877"] Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.545000 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-nj877"] Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.991532 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14c7e742-337d-443c-a3e1-057300517c25" path="/var/lib/kubelet/pods/14c7e742-337d-443c-a3e1-057300517c25/volumes" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.992555 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" path="/var/lib/kubelet/pods/4e1f1272-6892-4bc2-be85-30b6d08df6ec/volumes" Feb 16 11:10:13 crc kubenswrapper[4797]: I0216 11:10:13.993376 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad05eae6-52a0-4044-a080-06cb3ebc5a04" path="/var/lib/kubelet/pods/ad05eae6-52a0-4044-a080-06cb3ebc5a04/volumes" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.431166 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99"] Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432168 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="extract-utilities" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432190 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="extract-utilities" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432216 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="extract-content" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432229 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="extract-content" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432244 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="extract-utilities" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432260 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="extract-utilities" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432289 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432306 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" 
containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432326 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432341 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432372 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerName="extract-utilities" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432384 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerName="extract-utilities" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432404 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="extract-content" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432417 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="extract-content" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432433 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerName="extract-content" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432448 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerName="extract-content" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432471 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432488 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: E0216 11:10:18.432504 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad05eae6-52a0-4044-a080-06cb3ebc5a04" containerName="oauth-openshift" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432520 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad05eae6-52a0-4044-a080-06cb3ebc5a04" containerName="oauth-openshift" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432821 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1f1272-6892-4bc2-be85-30b6d08df6ec" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432843 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="14c7e742-337d-443c-a3e1-057300517c25" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432891 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad05eae6-52a0-4044-a080-06cb3ebc5a04" containerName="oauth-openshift" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.432906 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8ffede4-813c-406a-9590-79f745ef4283" containerName="registry-server" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.433541 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.436637 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.437553 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.437815 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.437829 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.438168 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.438236 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.438742 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.438839 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.439572 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.439845 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.440057 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.440468 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.453811 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99"] Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.454878 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.456957 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.465842 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.508243 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " 
pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.508552 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdchd\" (UniqueName: \"kubernetes.io/projected/382fa731-733b-42bf-966b-058b40e0e89d-kube-api-access-rdchd\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.508743 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/382fa731-733b-42bf-966b-058b40e0e89d-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.508863 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.508975 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509120 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509208 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509298 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509401 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" 
(UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509501 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509666 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509768 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.509909 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.510000 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.611692 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.611933 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612065 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612163 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612257 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612349 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdchd\" (UniqueName: \"kubernetes.io/projected/382fa731-733b-42bf-966b-058b40e0e89d-kube-api-access-rdchd\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612441 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/382fa731-733b-42bf-966b-058b40e0e89d-audit-dir\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612527 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612690 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612828 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612691 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/382fa731-733b-42bf-966b-058b40e0e89d-audit-dir\") pod 
\"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.612908 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.613046 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.613129 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.613201 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.613245 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.613564 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-audit-policies\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.613901 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.614217 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: 
\"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.618168 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.618502 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-login\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.618649 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.619201 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-template-error\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.619525 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.619723 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.620549 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-session\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.628906 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/382fa731-733b-42bf-966b-058b40e0e89d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: 
\"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.635628 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdchd\" (UniqueName: \"kubernetes.io/projected/382fa731-733b-42bf-966b-058b40e0e89d-kube-api-access-rdchd\") pod \"oauth-openshift-64f4b9bb7f-w8v99\" (UID: \"382fa731-733b-42bf-966b-058b40e0e89d\") " pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:18 crc kubenswrapper[4797]: I0216 11:10:18.769303 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:19 crc kubenswrapper[4797]: I0216 11:10:19.001528 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99"] Feb 16 11:10:19 crc kubenswrapper[4797]: I0216 11:10:19.245890 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" event={"ID":"382fa731-733b-42bf-966b-058b40e0e89d","Type":"ContainerStarted","Data":"b0e4972cec77e1ada06fc5d2bd86827a7ef09617a87d5ce3f3d89d2c9ff00bff"} Feb 16 11:10:19 crc kubenswrapper[4797]: I0216 11:10:19.246238 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" event={"ID":"382fa731-733b-42bf-966b-058b40e0e89d","Type":"ContainerStarted","Data":"a7d02aba4de46e5705a84b42aed9abf01aed4ecb70c4303ae554e9c16e8e5178"} Feb 16 11:10:20 crc kubenswrapper[4797]: I0216 11:10:20.254318 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:20 crc kubenswrapper[4797]: I0216 11:10:20.264806 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" Feb 16 11:10:20 crc kubenswrapper[4797]: I0216 11:10:20.293045 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64f4b9bb7f-w8v99" podStartSLOduration=33.293019657 podStartE2EDuration="33.293019657s" podCreationTimestamp="2026-02-16 11:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:10:20.29114033 +0000 UTC m=+215.011325320" watchObservedRunningTime="2026-02-16 11:10:20.293019657 +0000 UTC m=+215.013204647" Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.992365 4797 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 11:10:46 crc kubenswrapper[4797]: E0216 11:10:46.992968 4797 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.993416 4797 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.993598 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.993713 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38" gracePeriod=15 Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.993728 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803" gracePeriod=15 Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.993800 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057" gracePeriod=15 Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.993836 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52" gracePeriod=15 Feb 16 11:10:46 crc kubenswrapper[4797]: I0216 11:10:46.993841 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485" gracePeriod=15 Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000250 4797 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.000795 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000812 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.000843 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000852 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.000862 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000869 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.000878 4797 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000885 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.000895 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000931 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.000943 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000950 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.000960 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.000967 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001198 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001209 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001222 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001250 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001261 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001269 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001284 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.001532 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.001543 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025555 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025620 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025642 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025662 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025710 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025739 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025804 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.025845 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.126909 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127227 4797 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127249 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127264 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127280 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127304 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127328 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127363 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127412 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127030 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127453 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127473 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127494 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127515 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127535 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.127554 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.421829 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.423812 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.426615 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803" exitCode=0 Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.426666 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057" exitCode=0 Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.426688 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52" exitCode=0 Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.426707 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485" exitCode=2 Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.426821 4797 scope.go:117] "RemoveContainer" containerID="cb1cbc6028e8e71bf4ea67e2a709d32a66707710a2ceb9345076318991849cb1" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.435786 4797 generic.go:334] "Generic (PLEG): container finished" podID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" containerID="57987d960ee8539446f25623f0275218162871303bbbca93262ac8f1f77fdb04" exitCode=0 Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.435837 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d","Type":"ContainerDied","Data":"57987d960ee8539446f25623f0275218162871303bbbca93262ac8f1f77fdb04"} Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.436778 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.437185 4797 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.717801 4797 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.718535 4797 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.719108 4797 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.719543 4797 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.720038 4797 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:47 crc kubenswrapper[4797]: I0216 11:10:47.720088 4797 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.720509 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.192:6443: connect: connection refused" interval="200ms" Feb 16 11:10:47 crc kubenswrapper[4797]: E0216 11:10:47.922437 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="400ms" Feb 16 11:10:48 crc kubenswrapper[4797]: E0216 11:10:48.323791 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="800ms" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.443530 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.701120 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.702110 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.755130 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-var-lock\") pod \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.755208 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kube-api-access\") pod \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.755223 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kubelet-dir\") pod \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\" (UID: \"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d\") " Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.755267 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-var-lock" (OuterVolumeSpecName: "var-lock") pod "3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" (UID: "3c82693e-fd79-4a2c-97d5-ef5facb4fe8d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.755395 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" (UID: "3c82693e-fd79-4a2c-97d5-ef5facb4fe8d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.755515 4797 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.755527 4797 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.765917 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" (UID: "3c82693e-fd79-4a2c-97d5-ef5facb4fe8d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:10:48 crc kubenswrapper[4797]: I0216 11:10:48.856773 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3c82693e-fd79-4a2c-97d5-ef5facb4fe8d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:49 crc kubenswrapper[4797]: E0216 11:10:49.126446 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="1.6s" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.352707 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.353656 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.354230 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.354678 4797 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.365155 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.365218 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.365258 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.365439 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.365472 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.365487 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.453889 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.454791 4797 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38" exitCode=0 Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.454866 4797 scope.go:117] "RemoveContainer" containerID="c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.455133 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.456794 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3c82693e-fd79-4a2c-97d5-ef5facb4fe8d","Type":"ContainerDied","Data":"025b10746c9e4654f5fbc4ede060a8ffbcd1682fbb1148727453be183585d1d8"} Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.456879 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="025b10746c9e4654f5fbc4ede060a8ffbcd1682fbb1148727453be183585d1d8" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.456950 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.466211 4797 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.466259 4797 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.466273 4797 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.472899 4797 scope.go:117] "RemoveContainer" containerID="e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.481287 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.481831 4797 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.485313 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.485832 4797 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.491286 4797 scope.go:117] "RemoveContainer" containerID="4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.507478 4797 scope.go:117] "RemoveContainer" containerID="e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.523747 4797 scope.go:117] "RemoveContainer" containerID="4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.541234 4797 scope.go:117] "RemoveContainer" containerID="089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.566434 4797 scope.go:117] "RemoveContainer" containerID="c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803" Feb 16 11:10:49 crc kubenswrapper[4797]: E0216 11:10:49.567440 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\": container with ID starting with c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803 not found: ID does not exist" containerID="c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.567470 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803"} err="failed to get container status \"c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\": rpc error: code = NotFound desc = could not find container \"c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803\": container with ID starting with c583628f8f8b21cf4bfa1a315a85156b09a0f5b8f91b80d1a89f8efcd8558803 not found: ID does not exist" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.567493 4797 scope.go:117] "RemoveContainer" containerID="e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057" Feb 16 11:10:49 crc kubenswrapper[4797]: E0216 11:10:49.567899 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\": container with ID starting with e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057 not found: ID does not exist" containerID="e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.567920 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057"} err="failed to get container status \"e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\": rpc error: code = 
NotFound desc = could not find container \"e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057\": container with ID starting with e500838bf424c17c7a1781a56aec4039ddd5b4ebff97a747b1e7b2ae38071057 not found: ID does not exist" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.567931 4797 scope.go:117] "RemoveContainer" containerID="4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52" Feb 16 11:10:49 crc kubenswrapper[4797]: E0216 11:10:49.568313 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\": container with ID starting with 4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52 not found: ID does not exist" containerID="4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.568355 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52"} err="failed to get container status \"4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\": rpc error: code = NotFound desc = could not find container \"4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52\": container with ID starting with 4e24f15e03484cd4498345d7ca1347803c10cc6342485a3a221da5d5980f6e52 not found: ID does not exist" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.568379 4797 scope.go:117] "RemoveContainer" containerID="e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485" Feb 16 11:10:49 crc kubenswrapper[4797]: E0216 11:10:49.568794 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\": container with ID starting with e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485 not found: ID does not exist" containerID="e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.568816 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485"} err="failed to get container status \"e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\": rpc error: code = NotFound desc = could not find container \"e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485\": container with ID starting with e19072c509dca996915ecfaf33ce7c86b9e76cce14a0e10fbb46fff0a7b3e485 not found: ID does not exist" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.568827 4797 scope.go:117] "RemoveContainer" containerID="4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38" Feb 16 11:10:49 crc kubenswrapper[4797]: E0216 11:10:49.569083 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\": container with ID starting with 4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38 not found: ID does not exist" containerID="4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.569117 4797 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38"} err="failed to get container status \"4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\": rpc error: code = NotFound desc = could not find container \"4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38\": container with ID starting with 4ab342b4e32bac74f180660bdd65cabeca45b14d5965ae39dd8179a8bf81db38 not found: ID does not exist" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.569140 4797 scope.go:117] "RemoveContainer" containerID="089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f" Feb 16 11:10:49 crc kubenswrapper[4797]: E0216 11:10:49.569471 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\": container with ID starting with 089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f not found: ID does not exist" containerID="089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.569496 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f"} err="failed to get container status \"089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\": rpc error: code = NotFound desc = could not find container \"089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f\": container with ID starting with 089b06b3dde0b3567ec092be5e57cbb1e62376167e5bbd0a306179f5b64e4e5f not found: ID does not exist" Feb 16 11:10:49 crc kubenswrapper[4797]: I0216 11:10:49.989278 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 11:10:50 crc kubenswrapper[4797]: E0216 11:10:50.727777 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="3.2s" Feb 16 11:10:52 crc kubenswrapper[4797]: E0216 11:10:52.037411 4797 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:10:52 crc kubenswrapper[4797]: I0216 11:10:52.038144 4797 util.go:30] "No sandbox for pod can be found. 
Feb 16 11:10:52 crc kubenswrapper[4797]: I0216 11:10:52.038144 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 11:10:52 crc kubenswrapper[4797]: E0216 11:10:52.070355 4797 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.192:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894b59ff47e91ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 11:10:52.069810668 +0000 UTC m=+246.789995658,LastTimestamp:2026-02-16 11:10:52.069810668 +0000 UTC m=+246.789995658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 11:10:52 crc kubenswrapper[4797]: I0216 11:10:52.476764 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27"}
Feb 16 11:10:52 crc kubenswrapper[4797]: I0216 11:10:52.478382 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"abe1a7cca5ac1f9f13209f3e89b98926692cb32725cef5725933ef5838f0bcd6"}
Feb 16 11:10:52 crc kubenswrapper[4797]: I0216 11:10:52.479152 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Feb 16 11:10:52 crc kubenswrapper[4797]: E0216 11:10:52.479543 4797 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 11:10:53 crc kubenswrapper[4797]: E0216 11:10:53.929035 4797 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.192:6443: connect: connection refused" interval="6.4s"
Feb 16 11:10:55 crc kubenswrapper[4797]: I0216 11:10:55.984815 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Feb 16 11:10:57 crc kubenswrapper[4797]: I0216 11:10:57.982039 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 11:10:57 crc kubenswrapper[4797]: I0216 11:10:57.983105 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.016174 4797 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98"
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.016421 4797 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98"
Feb 16 11:10:58 crc kubenswrapper[4797]: E0216 11:10:58.016942 4797 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.018738 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.511440 4797 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="62207f4f6f0b7db0ab16acfbba823904082dd69b21b9a1b8b55d0e64cb84832f" exitCode=0
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.511769 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"62207f4f6f0b7db0ab16acfbba823904082dd69b21b9a1b8b55d0e64cb84832f"}
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.511800 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ba1155259336b270ea93743bc75a4ae66eb29d28725b13697fd5abca5052b57f"}
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.512049 4797 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98"
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.512062 4797 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98"
Feb 16 11:10:58 crc kubenswrapper[4797]: E0216 11:10:58.512367 4797 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.192:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 11:10:58 crc kubenswrapper[4797]: I0216 11:10:58.512714 4797 status_manager.go:851] "Failed to get status for pod" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.192:6443: connect: connection refused"
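[Annotation] The "Trying to delete pod" / "Deleting a mirror pod" entries are the static-pod path: kube-apiserver-crc runs from a file on disk, and the kubelet keeps an API-server "mirror" of it purely for visibility. When the on-disk pod changes, the stale mirror (identified by UID) must be deleted and recreated; here every Delete fails with connection refused because the API server itself is the pod being restarted, and the sync loop simply retries. A sketch of the delete-by-UID step, assuming a client-go clientset (helper name hypothetical):

```go
// Sketch: delete a stale mirror pod, guarded by a UID precondition so a
// freshly recreated mirror with the same name is never deleted by mistake.
package mirrorsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func deleteMirrorPod(ctx context.Context, cs kubernetes.Interface, ns, name string, uid types.UID) error {
	// On "connection refused" (API server down, as in the log) the caller
	// just tries again on the next sync iteration.
	return cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{
		Preconditions: metav1.NewUIDPreconditions(string(uid)),
	})
}
```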
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2b22a99fea42a32b081ed94fe6f8504343a223da13a2bb2735a57eaf601405f4"} Feb 16 11:10:59 crc kubenswrapper[4797]: I0216 11:10:59.523509 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df6bcd7d6d289c0ed7b5a93c264e346c6f01d4ee9aadaa4192f7c2eddcab3e82"} Feb 16 11:10:59 crc kubenswrapper[4797]: I0216 11:10:59.523556 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"40542e455e905123505ed63bc2b589690e759bb2ba2e706dc800564bc395900c"} Feb 16 11:10:59 crc kubenswrapper[4797]: I0216 11:10:59.523611 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d46a01ea4999c0e054bbbec3d97c23707d172e5cc2f18d01a0480fb8dc6f74f3"} Feb 16 11:11:00 crc kubenswrapper[4797]: I0216 11:11:00.530914 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a0ed8dbf9ae3f278ba3621d5de7260d98bca419f6bdfedd74c4a08a9af3f463c"} Feb 16 11:11:00 crc kubenswrapper[4797]: I0216 11:11:00.531950 4797 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98" Feb 16 11:11:00 crc kubenswrapper[4797]: I0216 11:11:00.531977 4797 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98" Feb 16 11:11:00 crc kubenswrapper[4797]: I0216 11:11:00.532254 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:11:02 crc kubenswrapper[4797]: I0216 11:11:02.552202 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 11:11:02 crc kubenswrapper[4797]: I0216 11:11:02.552622 4797 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4" exitCode=1 Feb 16 11:11:02 crc kubenswrapper[4797]: I0216 11:11:02.552678 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4"} Feb 16 11:11:02 crc kubenswrapper[4797]: I0216 11:11:02.553258 4797 scope.go:117] "RemoveContainer" containerID="f7af7a88b618dd2ba868b2dd91b838e9ad85f7e8aa55108a2605e8744c6846a4" Feb 16 11:11:03 crc kubenswrapper[4797]: I0216 11:11:03.019752 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:11:03 crc kubenswrapper[4797]: I0216 11:11:03.019828 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:11:03 crc kubenswrapper[4797]: I0216 11:11:03.025121 4797 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:11:03 crc kubenswrapper[4797]: I0216 11:11:03.555812 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:11:03 crc kubenswrapper[4797]: I0216 11:11:03.564292 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 11:11:03 crc kubenswrapper[4797]: I0216 11:11:03.564371 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3fe352adfa602c54a26d0580c7c286218cc1b575db4515eef80bd923d38a45a5"} Feb 16 11:11:05 crc kubenswrapper[4797]: I0216 11:11:05.540332 4797 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:11:05 crc kubenswrapper[4797]: I0216 11:11:05.575765 4797 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98" Feb 16 11:11:05 crc kubenswrapper[4797]: I0216 11:11:05.575800 4797 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98" Feb 16 11:11:05 crc kubenswrapper[4797]: I0216 11:11:05.578792 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:11:05 crc kubenswrapper[4797]: I0216 11:11:05.832330 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:11:05 crc kubenswrapper[4797]: I0216 11:11:05.836687 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:11:05 crc kubenswrapper[4797]: I0216 11:11:05.999323 4797 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="70b656a3-79b8-44ee-8903-6303611c1274" Feb 16 11:11:06 crc kubenswrapper[4797]: I0216 11:11:06.579612 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:11:06 crc kubenswrapper[4797]: I0216 11:11:06.581120 4797 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98" Feb 16 11:11:06 crc kubenswrapper[4797]: I0216 11:11:06.581149 4797 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="705d9f4b-2610-4bce-8adf-a80a8c630c98" Feb 16 11:11:06 crc kubenswrapper[4797]: I0216 11:11:06.584660 4797 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="70b656a3-79b8-44ee-8903-6303611c1274" Feb 16 11:11:11 crc kubenswrapper[4797]: I0216 11:11:11.843472 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.301555 4797 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.463462 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.515732 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.968500 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.969073 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.969221 4797 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.969371 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.969455 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.976046 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 11:11:12 crc kubenswrapper[4797]: I0216 11:11:12.980929 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.068414 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.245147 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.388771 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.563689 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.572521 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.720533 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.769404 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.793875 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 11:11:13 crc kubenswrapper[4797]: I0216 11:11:13.794767 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 11:11:14 crc kubenswrapper[4797]: I0216 11:11:14.007146 4797 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 11:11:14 crc kubenswrapper[4797]: I0216 11:11:14.115793 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 11:11:14 crc kubenswrapper[4797]: I0216 11:11:14.222464 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 11:11:14 crc kubenswrapper[4797]: I0216 11:11:14.279143 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 11:11:14 crc kubenswrapper[4797]: I0216 11:11:14.486557 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 11:11:14 crc kubenswrapper[4797]: I0216 11:11:14.917599 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 11:11:15 crc kubenswrapper[4797]: I0216 11:11:15.732833 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.089413 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.238225 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.246915 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.281262 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.305348 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.440313 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.479981 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 11:11:16 crc kubenswrapper[4797]: I0216 11:11:16.647893 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 11:11:17 crc kubenswrapper[4797]: I0216 11:11:17.131182 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 11:11:17 crc kubenswrapper[4797]: I0216 11:11:17.246229 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 11:11:17 crc kubenswrapper[4797]: I0216 11:11:17.277945 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 11:11:17 crc kubenswrapper[4797]: I0216 11:11:17.422555 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 11:11:17 crc kubenswrapper[4797]: I0216 11:11:17.642271 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" 
Feb 16 11:11:18 crc kubenswrapper[4797]: I0216 11:11:18.136398 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 11:11:18 crc kubenswrapper[4797]: I0216 11:11:18.596904 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 11:11:18 crc kubenswrapper[4797]: I0216 11:11:18.691735 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 11:11:18 crc kubenswrapper[4797]: I0216 11:11:18.723333 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 11:11:18 crc kubenswrapper[4797]: I0216 11:11:18.813158 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 11:11:19 crc kubenswrapper[4797]: I0216 11:11:19.231245 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 11:11:19 crc kubenswrapper[4797]: I0216 11:11:19.410117 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 11:11:19 crc kubenswrapper[4797]: I0216 11:11:19.426555 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 11:11:19 crc kubenswrapper[4797]: I0216 11:11:19.742910 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 11:11:19 crc kubenswrapper[4797]: I0216 11:11:19.820207 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 11:11:19 crc kubenswrapper[4797]: I0216 11:11:19.927031 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.011001 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.134597 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.189742 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.222933 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.341706 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.362261 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.421089 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.621384 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.640687 4797 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.729976 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 11:11:20 crc kubenswrapper[4797]: I0216 11:11:20.966300 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.353892 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.358634 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.387222 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.403748 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.444895 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.499817 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.528096 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.537014 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.539540 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.623163 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.711706 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.754445 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 11:11:21 crc kubenswrapper[4797]: I0216 11:11:21.864009 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.029189 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.150806 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.210995 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 11:11:22 
crc kubenswrapper[4797]: I0216 11:11:22.331707 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.404481 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.413033 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.541269 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.648212 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.674915 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.810601 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.843276 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.848020 4797 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.871886 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.923121 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.935687 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 11:11:22 crc kubenswrapper[4797]: I0216 11:11:22.948082 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.069966 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.115198 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.135779 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.205851 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.261607 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.337092 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 11:11:23 crc kubenswrapper[4797]: 
I0216 11:11:23.482019 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.503505 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.555407 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.641669 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.713166 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.795766 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.826195 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.832775 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.883178 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.909240 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 11:11:23 crc kubenswrapper[4797]: I0216 11:11:23.927121 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.033122 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.069255 4797 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.190520 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.203531 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.205851 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.259742 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.301661 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.511783 4797 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.539806 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.571516 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.580475 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.599892 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.604512 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.720517 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.778502 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.848680 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 11:11:24 crc kubenswrapper[4797]: I0216 11:11:24.896136 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.063601 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.114719 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.129628 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.182896 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.251413 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.302323 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.333451 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.340971 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.472077 4797 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.491879 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.506611 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.562355 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.572088 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.577517 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.680952 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.788944 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.847678 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.849259 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.881148 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 11:11:25 crc kubenswrapper[4797]: I0216 11:11:25.916377 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.236808 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.248797 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.281274 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.285112 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.316780 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.575929 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.612481 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.734934 4797 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.769671 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.896812 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.991991 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 11:11:26 crc kubenswrapper[4797]: I0216 11:11:26.992977 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.012120 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.177092 4797 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.180676 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.180720 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.180737 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p44df","openshift-marketplace/community-operators-shcfr","openshift-marketplace/certified-operators-jndqp","openshift-marketplace/redhat-operators-kfgcw","openshift-marketplace/marketplace-operator-79b997595-5rgnb"] Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.180893 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" podUID="cb64bae9-2b5d-4ad4-b184-36f36908713a" containerName="marketplace-operator" containerID="cri-o://af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939" gracePeriod=30 Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.181006 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-shcfr" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="registry-server" containerID="cri-o://16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16" gracePeriod=30 Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.181351 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p44df" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="registry-server" containerID="cri-o://38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a" gracePeriod=30 Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.181411 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kfgcw" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="registry-server" containerID="cri-o://aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6" gracePeriod=30 Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.181531 4797 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-jndqp" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="registry-server" containerID="cri-o://3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef" gracePeriod=30 Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.223421 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.237023 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.236997986 podStartE2EDuration="22.236997986s" podCreationTimestamp="2026-02-16 11:11:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:11:27.236707888 +0000 UTC m=+281.956892878" watchObservedRunningTime="2026-02-16 11:11:27.236997986 +0000 UTC m=+281.957182986" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.253504 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.305938 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.335711 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.342095 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.403142 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.418347 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.519993 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.570381 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.587379 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.602868 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.672744 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.686316 4797 util.go:48] "No ready sandbox for pod can be found. 
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.686316 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jndqp"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.687025 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.707774 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcqpk\" (UniqueName: \"kubernetes.io/projected/3cf1530b-a55a-41f5-bffc-b2094a0e8746-kube-api-access-kcqpk\") pod \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.707842 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-utilities\") pod \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.707907 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-catalog-content\") pod \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\" (UID: \"3cf1530b-a55a-41f5-bffc-b2094a0e8746\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.709082 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfgcw"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.709153 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-utilities" (OuterVolumeSpecName: "utilities") pod "3cf1530b-a55a-41f5-bffc-b2094a0e8746" (UID: "3cf1530b-a55a-41f5-bffc-b2094a0e8746"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.713317 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf1530b-a55a-41f5-bffc-b2094a0e8746-kube-api-access-kcqpk" (OuterVolumeSpecName: "kube-api-access-kcqpk") pod "3cf1530b-a55a-41f5-bffc-b2094a0e8746" (UID: "3cf1530b-a55a-41f5-bffc-b2094a0e8746"). InnerVolumeSpecName "kube-api-access-kcqpk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.729210 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.748943 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.770252 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.781447 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.781976 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cf1530b-a55a-41f5-bffc-b2094a0e8746" (UID: "3cf1530b-a55a-41f5-bffc-b2094a0e8746"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.808622 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-utilities\") pod \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.808744 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzjr6\" (UniqueName: \"kubernetes.io/projected/3f9dbba0-bbf1-49d8-84c2-158007da8a69-kube-api-access-rzjr6\") pod \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.808784 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-utilities\") pod \"15fec828-6337-4c27-93ca-4b022a74486e\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.808811 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-catalog-content\") pod \"15fec828-6337-4c27-93ca-4b022a74486e\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.808838 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-catalog-content\") pod \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\" (UID: \"3f9dbba0-bbf1-49d8-84c2-158007da8a69\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.808863 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8x2x\" (UniqueName: \"kubernetes.io/projected/15fec828-6337-4c27-93ca-4b022a74486e-kube-api-access-r8x2x\") pod \"15fec828-6337-4c27-93ca-4b022a74486e\" (UID: \"15fec828-6337-4c27-93ca-4b022a74486e\") "
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.809364 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-utilities" (OuterVolumeSpecName: "utilities") pod "3f9dbba0-bbf1-49d8-84c2-158007da8a69" (UID: "3f9dbba0-bbf1-49d8-84c2-158007da8a69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.809543 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-utilities" (OuterVolumeSpecName: "utilities") pod "15fec828-6337-4c27-93ca-4b022a74486e" (UID: "15fec828-6337-4c27-93ca-4b022a74486e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.809821 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.809873 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.809889 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcqpk\" (UniqueName: \"kubernetes.io/projected/3cf1530b-a55a-41f5-bffc-b2094a0e8746-kube-api-access-kcqpk\") on node \"crc\" DevicePath \"\""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.809905 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cf1530b-a55a-41f5-bffc-b2094a0e8746-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.813922 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.814177 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15fec828-6337-4c27-93ca-4b022a74486e-kube-api-access-r8x2x" (OuterVolumeSpecName: "kube-api-access-r8x2x") pod "15fec828-6337-4c27-93ca-4b022a74486e" (UID: "15fec828-6337-4c27-93ca-4b022a74486e"). InnerVolumeSpecName "kube-api-access-r8x2x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.815211 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9dbba0-bbf1-49d8-84c2-158007da8a69-kube-api-access-rzjr6" (OuterVolumeSpecName: "kube-api-access-rzjr6") pod "3f9dbba0-bbf1-49d8-84c2-158007da8a69" (UID: "3f9dbba0-bbf1-49d8-84c2-158007da8a69"). InnerVolumeSpecName "kube-api-access-rzjr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.833975 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f9dbba0-bbf1-49d8-84c2-158007da8a69" (UID: "3f9dbba0-bbf1-49d8-84c2-158007da8a69"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.837067 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.860336 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15fec828-6337-4c27-93ca-4b022a74486e" (UID: "15fec828-6337-4c27-93ca-4b022a74486e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.881371 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911174 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-operator-metrics\") pod \"cb64bae9-2b5d-4ad4-b184-36f36908713a\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911229 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-catalog-content\") pod \"ab1faf22-b706-4d66-909e-e1de1fe89b62\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911304 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6gvs\" (UniqueName: \"kubernetes.io/projected/cb64bae9-2b5d-4ad4-b184-36f36908713a-kube-api-access-d6gvs\") pod \"cb64bae9-2b5d-4ad4-b184-36f36908713a\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911354 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-trusted-ca\") pod \"cb64bae9-2b5d-4ad4-b184-36f36908713a\" (UID: \"cb64bae9-2b5d-4ad4-b184-36f36908713a\") " Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911382 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn96r\" (UniqueName: \"kubernetes.io/projected/ab1faf22-b706-4d66-909e-e1de1fe89b62-kube-api-access-hn96r\") pod \"ab1faf22-b706-4d66-909e-e1de1fe89b62\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911409 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-utilities\") pod \"ab1faf22-b706-4d66-909e-e1de1fe89b62\" (UID: \"ab1faf22-b706-4d66-909e-e1de1fe89b62\") " Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911694 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzjr6\" (UniqueName: \"kubernetes.io/projected/3f9dbba0-bbf1-49d8-84c2-158007da8a69-kube-api-access-rzjr6\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911721 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911733 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15fec828-6337-4c27-93ca-4b022a74486e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911747 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f9dbba0-bbf1-49d8-84c2-158007da8a69-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911759 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8x2x\" (UniqueName: \"kubernetes.io/projected/15fec828-6337-4c27-93ca-4b022a74486e-kube-api-access-r8x2x\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.911984 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cb64bae9-2b5d-4ad4-b184-36f36908713a" (UID: "cb64bae9-2b5d-4ad4-b184-36f36908713a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.912245 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-utilities" (OuterVolumeSpecName: "utilities") pod "ab1faf22-b706-4d66-909e-e1de1fe89b62" (UID: "ab1faf22-b706-4d66-909e-e1de1fe89b62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.913843 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1faf22-b706-4d66-909e-e1de1fe89b62-kube-api-access-hn96r" (OuterVolumeSpecName: "kube-api-access-hn96r") pod "ab1faf22-b706-4d66-909e-e1de1fe89b62" (UID: "ab1faf22-b706-4d66-909e-e1de1fe89b62"). InnerVolumeSpecName "kube-api-access-hn96r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.914656 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb64bae9-2b5d-4ad4-b184-36f36908713a-kube-api-access-d6gvs" (OuterVolumeSpecName: "kube-api-access-d6gvs") pod "cb64bae9-2b5d-4ad4-b184-36f36908713a" (UID: "cb64bae9-2b5d-4ad4-b184-36f36908713a"). InnerVolumeSpecName "kube-api-access-d6gvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.915442 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cb64bae9-2b5d-4ad4-b184-36f36908713a" (UID: "cb64bae9-2b5d-4ad4-b184-36f36908713a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:11:27 crc kubenswrapper[4797]: I0216 11:11:27.926877 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.012176 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6gvs\" (UniqueName: \"kubernetes.io/projected/cb64bae9-2b5d-4ad4-b184-36f36908713a-kube-api-access-d6gvs\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.012207 4797 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.012217 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn96r\" (UniqueName: \"kubernetes.io/projected/ab1faf22-b706-4d66-909e-e1de1fe89b62-kube-api-access-hn96r\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.012228 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.012241 4797 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cb64bae9-2b5d-4ad4-b184-36f36908713a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.019263 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.022462 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.054632 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab1faf22-b706-4d66-909e-e1de1fe89b62" (UID: "ab1faf22-b706-4d66-909e-e1de1fe89b62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.055563 4797 generic.go:334] "Generic (PLEG): container finished" podID="15fec828-6337-4c27-93ca-4b022a74486e" containerID="3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef" exitCode=0 Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.055689 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jndqp" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.055698 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndqp" event={"ID":"15fec828-6337-4c27-93ca-4b022a74486e","Type":"ContainerDied","Data":"3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.055732 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jndqp" event={"ID":"15fec828-6337-4c27-93ca-4b022a74486e","Type":"ContainerDied","Data":"35ee0cda34d2eeb99d54699bfb2c3dfd245d940687aebede63adb82f5e4544f3"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.055775 4797 scope.go:117] "RemoveContainer" containerID="3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.068567 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.068607 4797 generic.go:334] "Generic (PLEG): container finished" podID="cb64bae9-2b5d-4ad4-b184-36f36908713a" containerID="af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939" exitCode=0 Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.068677 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" event={"ID":"cb64bae9-2b5d-4ad4-b184-36f36908713a","Type":"ContainerDied","Data":"af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.068704 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5rgnb" event={"ID":"cb64bae9-2b5d-4ad4-b184-36f36908713a","Type":"ContainerDied","Data":"72c93867d259dc89bb5f98be2f92c24a2377924d4a55b672272945c1429748cc"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.073452 4797 generic.go:334] "Generic (PLEG): container finished" podID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerID="16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16" exitCode=0 Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.073539 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shcfr" event={"ID":"3cf1530b-a55a-41f5-bffc-b2094a0e8746","Type":"ContainerDied","Data":"16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.073563 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-shcfr" event={"ID":"3cf1530b-a55a-41f5-bffc-b2094a0e8746","Type":"ContainerDied","Data":"3801b2589dbefa6331383eceac6d82cd540888ccee7b93389fd1ef3c95d8498d"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.073636 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-shcfr" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.075519 4797 generic.go:334] "Generic (PLEG): container finished" podID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerID="38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a" exitCode=0 Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.075564 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p44df" event={"ID":"3f9dbba0-bbf1-49d8-84c2-158007da8a69","Type":"ContainerDied","Data":"38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.075597 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p44df" event={"ID":"3f9dbba0-bbf1-49d8-84c2-158007da8a69","Type":"ContainerDied","Data":"ff88a00e84fd300ad403c26adddd1abef55481b59488a63ccae05081b765f941"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.075652 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p44df" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.090498 4797 scope.go:117] "RemoveContainer" containerID="819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.090798 4797 generic.go:334] "Generic (PLEG): container finished" podID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerID="aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6" exitCode=0 Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.090947 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfgcw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.091060 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfgcw" event={"ID":"ab1faf22-b706-4d66-909e-e1de1fe89b62","Type":"ContainerDied","Data":"aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.091103 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfgcw" event={"ID":"ab1faf22-b706-4d66-909e-e1de1fe89b62","Type":"ContainerDied","Data":"7693cf53aafc2080c9e5be39a3c435c91b5a048081877778a6e84bf934607b06"} Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.095720 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jndqp"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.101163 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jndqp"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.106702 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rgnb"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.109267 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5rgnb"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.111776 4797 scope.go:117] "RemoveContainer" containerID="4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.112993 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ab1faf22-b706-4d66-909e-e1de1fe89b62-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.115742 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.117803 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.134640 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p44df"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.138565 4797 scope.go:117] "RemoveContainer" containerID="3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.140369 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef\": container with ID starting with 3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef not found: ID does not exist" containerID="3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.140421 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef"} err="failed to get container status \"3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef\": rpc error: code = NotFound desc = could not find container \"3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef\": container with ID starting with 3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.140442 4797 scope.go:117] "RemoveContainer" containerID="819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.140895 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9\": container with ID starting with 819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9 not found: ID does not exist" containerID="819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.140922 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9"} err="failed to get container status \"819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9\": rpc error: code = NotFound desc = could not find container \"819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9\": container with ID starting with 819a179c477ae32419510be5b216903a61a162a4059142e44c7cc9a1114659d9 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.140935 4797 scope.go:117] "RemoveContainer" containerID="4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.141108 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22\": container with ID starting with 4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22 not found: ID does not exist" containerID="4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.141145 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22"} err="failed to get container status \"4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22\": rpc error: code = NotFound desc = could not find container \"4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22\": container with ID starting with 4d512f09d96acdd79694d7e8d5017aa33941968cc1fc0d6904f2aaaf2d8b8c22 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.141159 4797 scope.go:117] "RemoveContainer" containerID="af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.141518 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p44df"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.154185 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-shcfr"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.156598 4797 scope.go:117] "RemoveContainer" containerID="af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.157020 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939\": container with ID starting with af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939 not found: ID does not exist" containerID="af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.157052 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939"} err="failed to get container status \"af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939\": rpc error: code = NotFound desc = could not find container \"af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939\": container with ID starting with af466e5b9242d3d9bc04de99ebb4a27c0ca39008d50aabd80c162df39c365939 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.157074 4797 scope.go:117] "RemoveContainer" containerID="16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.161113 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-shcfr"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.164624 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfgcw"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.167999 4797 scope.go:117] "RemoveContainer" containerID="df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.168300 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kfgcw"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 
11:11:28.181917 4797 scope.go:117] "RemoveContainer" containerID="a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.194464 4797 scope.go:117] "RemoveContainer" containerID="16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.196106 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16\": container with ID starting with 16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16 not found: ID does not exist" containerID="16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.196184 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16"} err="failed to get container status \"16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16\": rpc error: code = NotFound desc = could not find container \"16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16\": container with ID starting with 16115c90f478d602be583193b2730eb08145dd796292a9d10ad8b08cabde0b16 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.196245 4797 scope.go:117] "RemoveContainer" containerID="df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.197128 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed\": container with ID starting with df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed not found: ID does not exist" containerID="df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.197182 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed"} err="failed to get container status \"df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed\": rpc error: code = NotFound desc = could not find container \"df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed\": container with ID starting with df241dd09d8d3d0d7854e93562198cfd0c278027b5a971a2f08c2fbe748252ed not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.197205 4797 scope.go:117] "RemoveContainer" containerID="a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.197613 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223\": container with ID starting with a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223 not found: ID does not exist" containerID="a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.197654 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223"} err="failed to get container status \"a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223\": rpc error: code = NotFound desc = could not find container \"a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223\": container with ID starting with a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223 not found: ID does not exist"
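These RemoveContainer retries race against CRI-O's own garbage collection: by the time the kubelet asks again for a container's status, the runtime has already pruned it and answers with gRPC NotFound, which pod_container_deletor.go:53 merely reports. Treating NotFound as success is the standard idempotent-delete pattern; a hedged sketch of that check using the real google.golang.org/grpc status API (removeContainer here is a stand-in for the CRI call, and the grpc module is the only external dependency):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for the CRI RemoveContainer call; it fails the
// way CRI-O does above once the container is already gone.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound,
		"could not find container %q: container with ID starting with %s not found: ID does not exist",
		id, id)
}

// cleanup treats NotFound as success: the desired state ("container gone")
// already holds, so deletion stays idempotent under retries.
func cleanup(id string) error {
	if err := removeContainer(id); err != nil && status.Code(err) != codes.NotFound {
		return fmt.Errorf("removing %s: %w", id, err)
	}
	return nil
}

func main() {
	if err := cleanup("3e7c0c22f84c4c295fbbac634618737e3fca37d2be50b775046f7eb43079e0ef"); err != nil {
		panic(err)
	}
	fmt.Println("NotFound tolerated: container already gone")
}
```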
\"a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223\": rpc error: code = NotFound desc = could not find container \"a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223\": container with ID starting with a801292df4c1e2d448d95ef14ec60725d5678230e66370914177f9854936e223 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.197684 4797 scope.go:117] "RemoveContainer" containerID="38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.217015 4797 scope.go:117] "RemoveContainer" containerID="148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.233015 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.235780 4797 scope.go:117] "RemoveContainer" containerID="148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.253464 4797 scope.go:117] "RemoveContainer" containerID="38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.253947 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a\": container with ID starting with 38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a not found: ID does not exist" containerID="38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.253986 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a"} err="failed to get container status \"38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a\": rpc error: code = NotFound desc = could not find container \"38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a\": container with ID starting with 38aa2f337f4c447ea3f0c8f8af0b3211381c02ad9c332b2e11320d710bb9ad0a not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.254014 4797 scope.go:117] "RemoveContainer" containerID="148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.254348 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865\": container with ID starting with 148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865 not found: ID does not exist" containerID="148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.254373 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865"} err="failed to get container status \"148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865\": rpc error: code = NotFound desc = could not find container \"148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865\": container with ID starting with 148163e61071b38c776e7c69f324662ba4a1e3f84778e0f966a587c993b17865 not found: ID does not exist" Feb 16 11:11:28 crc 
kubenswrapper[4797]: I0216 11:11:28.254387 4797 scope.go:117] "RemoveContainer" containerID="148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.254819 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1\": container with ID starting with 148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1 not found: ID does not exist" containerID="148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.254861 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1"} err="failed to get container status \"148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1\": rpc error: code = NotFound desc = could not find container \"148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1\": container with ID starting with 148765d7b86f5411c2ef70e3aa4779f382349e0f18d720939b244eb0a3ff56a1 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.254890 4797 scope.go:117] "RemoveContainer" containerID="aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.266908 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.268297 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.269113 4797 scope.go:117] "RemoveContainer" containerID="7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.284956 4797 scope.go:117] "RemoveContainer" containerID="64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.286391 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.293811 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.334370 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.344205 4797 scope.go:117] "RemoveContainer" containerID="aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.344826 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6\": container with ID starting with aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6 not found: ID does not exist" containerID="aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.344923 4797 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6"} err="failed to get container status \"aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6\": rpc error: code = NotFound desc = could not find container \"aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6\": container with ID starting with aa598df9c6358def2c9c6f8c1492aa8ad81bd6e01150f4a72cc258751111c8b6 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.345033 4797 scope.go:117] "RemoveContainer" containerID="7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.345632 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224\": container with ID starting with 7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224 not found: ID does not exist" containerID="7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.345675 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224"} err="failed to get container status \"7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224\": rpc error: code = NotFound desc = could not find container \"7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224\": container with ID starting with 7208f1606bdf1a9f0e60f9b65a338b14b2e5ed41aba22136a4d328a138f79224 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.345710 4797 scope.go:117] "RemoveContainer" containerID="64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.346003 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5\": container with ID starting with 64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5 not found: ID does not exist" containerID="64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.346032 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5"} err="failed to get container status \"64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5\": rpc error: code = NotFound desc = could not find container \"64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5\": container with ID starting with 64b6c5c5b02b3b966d5e7e7e0a319dd3face43097737a7641b17a679c63b3ba5 not found: ID does not exist" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464204 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mnplw"] Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464521 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="extract-utilities" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464543 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="extract-utilities" Feb 16 11:11:28 crc 
kubenswrapper[4797]: E0216 11:11:28.464559 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="extract-utilities" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464572 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="extract-utilities" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464664 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464680 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464700 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464717 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464744 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464758 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464777 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" containerName="installer" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464788 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" containerName="installer" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464803 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464818 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464834 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="extract-utilities" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464846 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="extract-utilities" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464863 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464875 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464889 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="extract-utilities" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464900 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="extract-utilities" 
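The cpu_manager.go:410 entries are logged at error severity but are routine here: admitting the replacement marketplace-operator pod triggers RemoveStaleState, which drops CPUSet and memory-manager bookkeeping for containers whose pods no longer exist. A toy version of that sweep (the real kubelet state carries CPUSet values and lives in the kubelet's container-manager packages; a bool stands in for them here):

```go
package main

import "fmt"

type key struct{ podUID, container string }

// staleSweep drops assignments whose pod no longer exists -- a toy version
// of what cpu_manager.go:410 and state_mem.go:107 log above.
func staleSweep(state map[key]bool, active map[string]bool) {
	for k := range state {
		if !active[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container %s/%s\n", k.podUID, k.container)
			delete(state, k)
		}
	}
}

func main() {
	state := map[key]bool{
		{"3cf1530b-a55a-41f5-bffc-b2094a0e8746", "extract-utilities"}:    true,
		{"31217ef5-71b8-4a30-b0c4-f5cd8a51e372", "marketplace-operator"}: true,
	}
	// Only the replacement marketplace-operator pod is still active.
	staleSweep(state, map[string]bool{"31217ef5-71b8-4a30-b0c4-f5cd8a51e372": true})
	fmt.Println("entries left:", len(state))
}
```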
Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464917 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464929 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464943 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464955 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.464972 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb64bae9-2b5d-4ad4-b184-36f36908713a" containerName="marketplace-operator" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.464984 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb64bae9-2b5d-4ad4-b184-36f36908713a" containerName="marketplace-operator" Feb 16 11:11:28 crc kubenswrapper[4797]: E0216 11:11:28.465001 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.465017 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="extract-content" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.465177 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb64bae9-2b5d-4ad4-b184-36f36908713a" containerName="marketplace-operator" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.465201 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="15fec828-6337-4c27-93ca-4b022a74486e" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.465217 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.465235 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.465254 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c82693e-fd79-4a2c-97d5-ef5facb4fe8d" containerName="installer" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.465275 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" containerName="registry-server" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.466040 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.470197 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.471659 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.474452 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.475034 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.480061 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mnplw"] Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.514181 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.535193 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.567002 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.580736 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.600393 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.619193 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.619247 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58f4w\" (UniqueName: \"kubernetes.io/projected/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-kube-api-access-58f4w\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.619275 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.667443 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.696947 4797 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.721338 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.721454 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58f4w\" (UniqueName: \"kubernetes.io/projected/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-kube-api-access-58f4w\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.721514 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.722771 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.740791 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.740904 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58f4w\" (UniqueName: \"kubernetes.io/projected/31217ef5-71b8-4a30-b0c4-f5cd8a51e372-kube-api-access-58f4w\") pod \"marketplace-operator-79b997595-mnplw\" (UID: \"31217ef5-71b8-4a30-b0c4-f5cd8a51e372\") " pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.774496 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.795523 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.827417 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.873298 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 11:11:28 crc kubenswrapper[4797]: I0216 11:11:28.959525 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.052146 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mnplw"] Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.090011 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.099154 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" event={"ID":"31217ef5-71b8-4a30-b0c4-f5cd8a51e372","Type":"ContainerStarted","Data":"dcc97181a3abaa58fc97a52207b1d7c0968b4b02c981303e4953d19cb17be92c"} Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.193020 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.221278 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.250639 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.261751 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.298878 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.313239 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.577976 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.698213 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.707532 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.752917 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.833129 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.922040 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.981684 4797 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.989911 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15fec828-6337-4c27-93ca-4b022a74486e" path="/var/lib/kubelet/pods/15fec828-6337-4c27-93ca-4b022a74486e/volumes" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.990989 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf1530b-a55a-41f5-bffc-b2094a0e8746" path="/var/lib/kubelet/pods/3cf1530b-a55a-41f5-bffc-b2094a0e8746/volumes" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.991848 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9dbba0-bbf1-49d8-84c2-158007da8a69" path="/var/lib/kubelet/pods/3f9dbba0-bbf1-49d8-84c2-158007da8a69/volumes" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.993649 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1faf22-b706-4d66-909e-e1de1fe89b62" path="/var/lib/kubelet/pods/ab1faf22-b706-4d66-909e-e1de1fe89b62/volumes" Feb 16 11:11:29 crc kubenswrapper[4797]: I0216 11:11:29.994622 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb64bae9-2b5d-4ad4-b184-36f36908713a" path="/var/lib/kubelet/pods/cb64bae9-2b5d-4ad4-b184-36f36908713a/volumes" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.110127 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.113849 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" event={"ID":"31217ef5-71b8-4a30-b0c4-f5cd8a51e372","Type":"ContainerStarted","Data":"7f7a87775ff36cdfb3f72976cf1486c42fec19a97f8fbb72ca8345da3badacea"} Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.113870 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.114292 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.119102 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.119759 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.147334 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mnplw" podStartSLOduration=7.14730741 podStartE2EDuration="7.14730741s" podCreationTimestamp="2026-02-16 11:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:11:30.146910909 +0000 UTC m=+284.867095929" watchObservedRunningTime="2026-02-16 11:11:30.14730741 +0000 UTC m=+284.867492410" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.149309 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.319991 4797 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.334797 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.458213 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.493363 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.592565 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 11:11:30 crc kubenswrapper[4797]: I0216 11:11:30.925388 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 11:11:31 crc kubenswrapper[4797]: I0216 11:11:31.155467 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 11:11:31 crc kubenswrapper[4797]: I0216 11:11:31.271784 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 11:11:31 crc kubenswrapper[4797]: I0216 11:11:31.601804 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 11:11:31 crc kubenswrapper[4797]: I0216 11:11:31.658509 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 11:11:31 crc kubenswrapper[4797]: I0216 11:11:31.690652 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.012642 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.020827 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.125038 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.189775 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.334357 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.371954 4797 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.378296 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.579951 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.684542 4797 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.686324 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 11:11:32 crc kubenswrapper[4797]: I0216 11:11:32.771675 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 11:11:33 crc kubenswrapper[4797]: I0216 11:11:33.502186 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 11:11:33 crc kubenswrapper[4797]: I0216 11:11:33.673803 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 11:11:33 crc kubenswrapper[4797]: I0216 11:11:33.893684 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 11:11:34 crc kubenswrapper[4797]: I0216 11:11:34.932571 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 11:11:35 crc kubenswrapper[4797]: I0216 11:11:35.023791 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 11:11:35 crc kubenswrapper[4797]: I0216 11:11:35.133916 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 11:11:35 crc kubenswrapper[4797]: I0216 11:11:35.440913 4797 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 11:11:35 crc kubenswrapper[4797]: I0216 11:11:35.680902 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 11:11:35 crc kubenswrapper[4797]: I0216 11:11:35.682943 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 11:11:35 crc kubenswrapper[4797]: I0216 11:11:35.914506 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 11:11:36 crc kubenswrapper[4797]: I0216 11:11:36.290288 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 11:11:39 crc kubenswrapper[4797]: I0216 11:11:39.236546 4797 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 11:11:39 crc kubenswrapper[4797]: I0216 11:11:39.237081 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27" gracePeriod=5 Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.352159 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.352798 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.539913 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.540414 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.540600 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.540771 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.541132 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.541360 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.541562 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.541689 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.541705 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.542698 4797 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.542898 4797 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.543054 4797 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.543214 4797 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.550019 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:11:44 crc kubenswrapper[4797]: I0216 11:11:44.645036 4797 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.211007 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.211059 4797 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27" exitCode=137 Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.211098 4797 scope.go:117] "RemoveContainer" containerID="a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27" Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.212097 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.228774 4797 scope.go:117] "RemoveContainer" containerID="a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27" Feb 16 11:11:45 crc kubenswrapper[4797]: E0216 11:11:45.229843 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27\": container with ID starting with a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27 not found: ID does not exist" containerID="a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27" Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.229904 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27"} err="failed to get container status \"a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27\": rpc error: code = NotFound desc = could not find container \"a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27\": container with ID starting with a058c78302937da7fe3947aa5ce7da1d72c2699dbc872aef16d9dea9a19e9b27 not found: ID does not exist" Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.771926 4797 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 11:11:45 crc kubenswrapper[4797]: I0216 11:11:45.988412 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.411043 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnvtz"] Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.411739 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" podUID="d2f2e6ac-38ac-41dd-b195-7fe50447270e" containerName="controller-manager" containerID="cri-o://1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352" gracePeriod=30 Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.538533 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"] Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.538815 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" podUID="45b58dea-daa7-4b11-b6b9-c5a9471f1129" containerName="route-controller-manager" containerID="cri-o://e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10" gracePeriod=30 Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.755071 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.861764 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.942272 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-proxy-ca-bundles\") pod \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.942354 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-client-ca\") pod \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.942397 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f2e6ac-38ac-41dd-b195-7fe50447270e-serving-cert\") pod \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.942427 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-config\") pod \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.942442 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrbb\" (UniqueName: \"kubernetes.io/projected/d2f2e6ac-38ac-41dd-b195-7fe50447270e-kube-api-access-9nrbb\") pod \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\" (UID: \"d2f2e6ac-38ac-41dd-b195-7fe50447270e\") " Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.943034 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d2f2e6ac-38ac-41dd-b195-7fe50447270e" (UID: "d2f2e6ac-38ac-41dd-b195-7fe50447270e"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.943179 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-config" (OuterVolumeSpecName: "config") pod "d2f2e6ac-38ac-41dd-b195-7fe50447270e" (UID: "d2f2e6ac-38ac-41dd-b195-7fe50447270e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.943603 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-client-ca" (OuterVolumeSpecName: "client-ca") pod "d2f2e6ac-38ac-41dd-b195-7fe50447270e" (UID: "d2f2e6ac-38ac-41dd-b195-7fe50447270e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.947665 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f2e6ac-38ac-41dd-b195-7fe50447270e-kube-api-access-9nrbb" (OuterVolumeSpecName: "kube-api-access-9nrbb") pod "d2f2e6ac-38ac-41dd-b195-7fe50447270e" (UID: "d2f2e6ac-38ac-41dd-b195-7fe50447270e"). 
InnerVolumeSpecName "kube-api-access-9nrbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:11:52 crc kubenswrapper[4797]: I0216 11:11:52.948044 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2f2e6ac-38ac-41dd-b195-7fe50447270e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d2f2e6ac-38ac-41dd-b195-7fe50447270e" (UID: "d2f2e6ac-38ac-41dd-b195-7fe50447270e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.043555 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-config\") pod \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044472 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b58dea-daa7-4b11-b6b9-c5a9471f1129-serving-cert\") pod \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044549 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skvsl\" (UniqueName: \"kubernetes.io/projected/45b58dea-daa7-4b11-b6b9-c5a9471f1129-kube-api-access-skvsl\") pod \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044594 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-client-ca\") pod \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\" (UID: \"45b58dea-daa7-4b11-b6b9-c5a9471f1129\") " Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044835 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044856 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrbb\" (UniqueName: \"kubernetes.io/projected/d2f2e6ac-38ac-41dd-b195-7fe50447270e-kube-api-access-9nrbb\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044869 4797 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044881 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2f2e6ac-38ac-41dd-b195-7fe50447270e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044891 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2f2e6ac-38ac-41dd-b195-7fe50447270e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.044403 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-config" (OuterVolumeSpecName: "config") pod "45b58dea-daa7-4b11-b6b9-c5a9471f1129" (UID: 
"45b58dea-daa7-4b11-b6b9-c5a9471f1129"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.045662 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-client-ca" (OuterVolumeSpecName: "client-ca") pod "45b58dea-daa7-4b11-b6b9-c5a9471f1129" (UID: "45b58dea-daa7-4b11-b6b9-c5a9471f1129"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.047535 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b58dea-daa7-4b11-b6b9-c5a9471f1129-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "45b58dea-daa7-4b11-b6b9-c5a9471f1129" (UID: "45b58dea-daa7-4b11-b6b9-c5a9471f1129"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.049337 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b58dea-daa7-4b11-b6b9-c5a9471f1129-kube-api-access-skvsl" (OuterVolumeSpecName: "kube-api-access-skvsl") pod "45b58dea-daa7-4b11-b6b9-c5a9471f1129" (UID: "45b58dea-daa7-4b11-b6b9-c5a9471f1129"). InnerVolumeSpecName "kube-api-access-skvsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.145813 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.145874 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45b58dea-daa7-4b11-b6b9-c5a9471f1129-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.145897 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skvsl\" (UniqueName: \"kubernetes.io/projected/45b58dea-daa7-4b11-b6b9-c5a9471f1129-kube-api-access-skvsl\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.145917 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/45b58dea-daa7-4b11-b6b9-c5a9471f1129-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.257225 4797 generic.go:334] "Generic (PLEG): container finished" podID="45b58dea-daa7-4b11-b6b9-c5a9471f1129" containerID="e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10" exitCode=0 Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.257325 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" event={"ID":"45b58dea-daa7-4b11-b6b9-c5a9471f1129","Type":"ContainerDied","Data":"e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10"} Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.257363 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" event={"ID":"45b58dea-daa7-4b11-b6b9-c5a9471f1129","Type":"ContainerDied","Data":"246504eb927279a1e92fe562a0bcd7ca44503a1c78b72301fcaf7e5e165a79a3"} Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.257393 4797 scope.go:117] 
"RemoveContainer" containerID="e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.257554 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.259932 4797 generic.go:334] "Generic (PLEG): container finished" podID="d2f2e6ac-38ac-41dd-b195-7fe50447270e" containerID="1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352" exitCode=0 Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.259993 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" event={"ID":"d2f2e6ac-38ac-41dd-b195-7fe50447270e","Type":"ContainerDied","Data":"1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352"} Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.260010 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.260033 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnvtz" event={"ID":"d2f2e6ac-38ac-41dd-b195-7fe50447270e","Type":"ContainerDied","Data":"520ed31b85ed2bd43075376cbca9e3fcb39833a02605484e45f426a9f831ec56"} Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.278861 4797 scope.go:117] "RemoveContainer" containerID="e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10" Feb 16 11:11:53 crc kubenswrapper[4797]: E0216 11:11:53.279774 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10\": container with ID starting with e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10 not found: ID does not exist" containerID="e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.279844 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10"} err="failed to get container status \"e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10\": rpc error: code = NotFound desc = could not find container \"e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10\": container with ID starting with e6dfa23b3c0362cd4046157b02815ad8fbc94cf470268080ef3b1ef4885aac10 not found: ID does not exist" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.279888 4797 scope.go:117] "RemoveContainer" containerID="1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.304105 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnvtz"] Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.306530 4797 scope.go:117] "RemoveContainer" containerID="1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352" Feb 16 11:11:53 crc kubenswrapper[4797]: E0216 11:11:53.307509 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352\": container with ID starting with 
1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352 not found: ID does not exist" containerID="1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.307819 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352"} err="failed to get container status \"1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352\": rpc error: code = NotFound desc = could not find container \"1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352\": container with ID starting with 1fbdec4ac86e2a78dc3a02f6fb32efe5485171841bddf970c4c44399dfc4a352 not found: ID does not exist" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.315722 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnvtz"] Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.319318 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"] Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.323147 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dxrpc"] Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.500517 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66b66c46c7-scgb2"] Feb 16 11:11:53 crc kubenswrapper[4797]: E0216 11:11:53.500788 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f2e6ac-38ac-41dd-b195-7fe50447270e" containerName="controller-manager" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.500804 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f2e6ac-38ac-41dd-b195-7fe50447270e" containerName="controller-manager" Feb 16 11:11:53 crc kubenswrapper[4797]: E0216 11:11:53.500820 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.500830 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 11:11:53 crc kubenswrapper[4797]: E0216 11:11:53.500846 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b58dea-daa7-4b11-b6b9-c5a9471f1129" containerName="route-controller-manager" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.500857 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b58dea-daa7-4b11-b6b9-c5a9471f1129" containerName="route-controller-manager" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.500998 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.501013 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b58dea-daa7-4b11-b6b9-c5a9471f1129" containerName="route-controller-manager" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.501038 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f2e6ac-38ac-41dd-b195-7fe50447270e" containerName="controller-manager" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.501476 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.503814 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.504704 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.505437 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.506085 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.507648 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.509201 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.527742 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.529270 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b66c46c7-scgb2"] Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.652920 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z2f4\" (UniqueName: \"kubernetes.io/projected/33840716-fb19-4f1b-9d04-b28168daf418-kube-api-access-4z2f4\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.652979 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33840716-fb19-4f1b-9d04-b28168daf418-serving-cert\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.653030 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-client-ca\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.653074 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-proxy-ca-bundles\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.653206 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-config\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.712184 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b66c46c7-scgb2"] Feb 16 11:11:53 crc kubenswrapper[4797]: E0216 11:11:53.712639 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-4z2f4 proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" podUID="33840716-fb19-4f1b-9d04-b28168daf418" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.745477 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t"] Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.746054 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.749928 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.750542 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.750638 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.751473 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.752258 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.753941 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-config\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.753970 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z2f4\" (UniqueName: \"kubernetes.io/projected/33840716-fb19-4f1b-9d04-b28168daf418-kube-api-access-4z2f4\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.753992 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33840716-fb19-4f1b-9d04-b28168daf418-serving-cert\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.754018 4797 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-client-ca\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.754043 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-config\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.754063 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-proxy-ca-bundles\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.754092 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-client-ca\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.754109 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-serving-cert\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.754124 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5ql\" (UniqueName: \"kubernetes.io/projected/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-kube-api-access-vz5ql\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.755197 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-client-ca\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.755616 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-proxy-ca-bundles\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.756045 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-config\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.760923 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.765546 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t"] Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.768573 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33840716-fb19-4f1b-9d04-b28168daf418-serving-cert\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.790919 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z2f4\" (UniqueName: \"kubernetes.io/projected/33840716-fb19-4f1b-9d04-b28168daf418-kube-api-access-4z2f4\") pod \"controller-manager-66b66c46c7-scgb2\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.821786 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t"] Feb 16 11:11:53 crc kubenswrapper[4797]: E0216 11:11:53.822138 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config kube-api-access-vz5ql serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" podUID="290c7bd2-8ab8-4ec2-9559-e8a891761c4a" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.855050 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-client-ca\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.855096 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-serving-cert\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.855119 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz5ql\" (UniqueName: \"kubernetes.io/projected/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-kube-api-access-vz5ql\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.855301 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-config\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.856003 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-client-ca\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.856327 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-config\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.861169 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-serving-cert\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.874091 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5ql\" (UniqueName: \"kubernetes.io/projected/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-kube-api-access-vz5ql\") pod \"route-controller-manager-7c8f45c5cb-vs58t\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.988022 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b58dea-daa7-4b11-b6b9-c5a9471f1129" path="/var/lib/kubelet/pods/45b58dea-daa7-4b11-b6b9-c5a9471f1129/volumes" Feb 16 11:11:53 crc kubenswrapper[4797]: I0216 11:11:53.988515 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f2e6ac-38ac-41dd-b195-7fe50447270e" path="/var/lib/kubelet/pods/d2f2e6ac-38ac-41dd-b195-7fe50447270e/volumes" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.268518 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.268548 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.281004 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.289660 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.462479 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-client-ca\") pod \"33840716-fb19-4f1b-9d04-b28168daf418\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.462910 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz5ql\" (UniqueName: \"kubernetes.io/projected/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-kube-api-access-vz5ql\") pod \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.463036 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-proxy-ca-bundles\") pod \"33840716-fb19-4f1b-9d04-b28168daf418\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.463229 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z2f4\" (UniqueName: \"kubernetes.io/projected/33840716-fb19-4f1b-9d04-b28168daf418-kube-api-access-4z2f4\") pod \"33840716-fb19-4f1b-9d04-b28168daf418\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.463295 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-client-ca" (OuterVolumeSpecName: "client-ca") pod "33840716-fb19-4f1b-9d04-b28168daf418" (UID: "33840716-fb19-4f1b-9d04-b28168daf418"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.463438 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "33840716-fb19-4f1b-9d04-b28168daf418" (UID: "33840716-fb19-4f1b-9d04-b28168daf418"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.463612 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-serving-cert\") pod \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.463784 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-client-ca\") pod \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.463924 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-config\") pod \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\" (UID: \"290c7bd2-8ab8-4ec2-9559-e8a891761c4a\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.464115 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-config\") pod \"33840716-fb19-4f1b-9d04-b28168daf418\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.464286 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33840716-fb19-4f1b-9d04-b28168daf418-serving-cert\") pod \"33840716-fb19-4f1b-9d04-b28168daf418\" (UID: \"33840716-fb19-4f1b-9d04-b28168daf418\") " Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.464812 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-config" (OuterVolumeSpecName: "config") pod "33840716-fb19-4f1b-9d04-b28168daf418" (UID: "33840716-fb19-4f1b-9d04-b28168daf418"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.464906 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-config" (OuterVolumeSpecName: "config") pod "290c7bd2-8ab8-4ec2-9559-e8a891761c4a" (UID: "290c7bd2-8ab8-4ec2-9559-e8a891761c4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.465121 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-client-ca" (OuterVolumeSpecName: "client-ca") pod "290c7bd2-8ab8-4ec2-9559-e8a891761c4a" (UID: "290c7bd2-8ab8-4ec2-9559-e8a891761c4a"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.465310 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.465445 4797 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.465564 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.465943 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33840716-fb19-4f1b-9d04-b28168daf418-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.466515 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33840716-fb19-4f1b-9d04-b28168daf418-kube-api-access-4z2f4" (OuterVolumeSpecName: "kube-api-access-4z2f4") pod "33840716-fb19-4f1b-9d04-b28168daf418" (UID: "33840716-fb19-4f1b-9d04-b28168daf418"). InnerVolumeSpecName "kube-api-access-4z2f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.466870 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33840716-fb19-4f1b-9d04-b28168daf418-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "33840716-fb19-4f1b-9d04-b28168daf418" (UID: "33840716-fb19-4f1b-9d04-b28168daf418"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.466976 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-kube-api-access-vz5ql" (OuterVolumeSpecName: "kube-api-access-vz5ql") pod "290c7bd2-8ab8-4ec2-9559-e8a891761c4a" (UID: "290c7bd2-8ab8-4ec2-9559-e8a891761c4a"). InnerVolumeSpecName "kube-api-access-vz5ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.467832 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "290c7bd2-8ab8-4ec2-9559-e8a891761c4a" (UID: "290c7bd2-8ab8-4ec2-9559-e8a891761c4a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.566748 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz5ql\" (UniqueName: \"kubernetes.io/projected/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-kube-api-access-vz5ql\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.566793 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z2f4\" (UniqueName: \"kubernetes.io/projected/33840716-fb19-4f1b-9d04-b28168daf418-kube-api-access-4z2f4\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.566815 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.566835 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/290c7bd2-8ab8-4ec2-9559-e8a891761c4a-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:54 crc kubenswrapper[4797]: I0216 11:11:54.566851 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33840716-fb19-4f1b-9d04-b28168daf418-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.272392 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b66c46c7-scgb2" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.272444 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.308922 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b66c46c7-scgb2"] Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.310737 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57dc77797b-lfndj"] Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.311893 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.316855 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.317111 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.318160 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.318320 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.318437 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.318505 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.318943 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66b66c46c7-scgb2"] Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.346357 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.346755 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57dc77797b-lfndj"] Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.357741 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t"] Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.361722 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-vs58t"] Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.377262 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-config\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.377302 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-client-ca\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.377340 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-proxy-ca-bundles\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.377377 4797 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhdkk\" (UniqueName: \"kubernetes.io/projected/ae6830e7-f012-4169-9805-ea02746d6d54-kube-api-access-jhdkk\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.377398 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6830e7-f012-4169-9805-ea02746d6d54-serving-cert\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.478697 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhdkk\" (UniqueName: \"kubernetes.io/projected/ae6830e7-f012-4169-9805-ea02746d6d54-kube-api-access-jhdkk\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.478790 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6830e7-f012-4169-9805-ea02746d6d54-serving-cert\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.478854 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-config\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.478875 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-client-ca\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.478903 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-proxy-ca-bundles\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.480448 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-config\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.480605 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-proxy-ca-bundles\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.481175 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-client-ca\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.492441 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6830e7-f012-4169-9805-ea02746d6d54-serving-cert\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.496493 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhdkk\" (UniqueName: \"kubernetes.io/projected/ae6830e7-f012-4169-9805-ea02746d6d54-kube-api-access-jhdkk\") pod \"controller-manager-57dc77797b-lfndj\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") " pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.643341 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.882214 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57dc77797b-lfndj"] Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.992505 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="290c7bd2-8ab8-4ec2-9559-e8a891761c4a" path="/var/lib/kubelet/pods/290c7bd2-8ab8-4ec2-9559-e8a891761c4a/volumes" Feb 16 11:11:55 crc kubenswrapper[4797]: I0216 11:11:55.993221 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33840716-fb19-4f1b-9d04-b28168daf418" path="/var/lib/kubelet/pods/33840716-fb19-4f1b-9d04-b28168daf418/volumes" Feb 16 11:11:56 crc kubenswrapper[4797]: I0216 11:11:56.288247 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" event={"ID":"ae6830e7-f012-4169-9805-ea02746d6d54","Type":"ContainerStarted","Data":"b8ea44f1c85adcc5873915c2bb8b5b981dce6eb1847a558fbbb973b6cd8bded4"} Feb 16 11:11:56 crc kubenswrapper[4797]: I0216 11:11:56.289445 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" event={"ID":"ae6830e7-f012-4169-9805-ea02746d6d54","Type":"ContainerStarted","Data":"e436265d063ff1837cc779bab5f57a52efbe77c6a2ca77dc4b85d587861ecfa4"} Feb 16 11:11:56 crc kubenswrapper[4797]: I0216 11:11:56.289620 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:56 crc kubenswrapper[4797]: I0216 11:11:56.294359 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" Feb 16 11:11:56 crc kubenswrapper[4797]: I0216 11:11:56.306995 4797 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" podStartSLOduration=3.306974637 podStartE2EDuration="3.306974637s" podCreationTimestamp="2026-02-16 11:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:11:56.301914366 +0000 UTC m=+311.022099346" watchObservedRunningTime="2026-02-16 11:11:56.306974637 +0000 UTC m=+311.027159617" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.499474 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"] Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.500198 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.503273 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.503990 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.504184 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.504532 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.504766 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.504923 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.518184 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"] Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.602506 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bc9b\" (UniqueName: \"kubernetes.io/projected/513ac8e1-1d93-4493-aed9-0b23036efcf4-kube-api-access-7bc9b\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.602793 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-config\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.602828 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-client-ca\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: 
\"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.602851 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ac8e1-1d93-4493-aed9-0b23036efcf4-serving-cert\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.703886 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-config\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.703962 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-client-ca\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.703990 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ac8e1-1d93-4493-aed9-0b23036efcf4-serving-cert\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.704082 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bc9b\" (UniqueName: \"kubernetes.io/projected/513ac8e1-1d93-4493-aed9-0b23036efcf4-kube-api-access-7bc9b\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.705287 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-config\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.705317 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-client-ca\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.712606 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ac8e1-1d93-4493-aed9-0b23036efcf4-serving-cert\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.730450 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bc9b\" (UniqueName: \"kubernetes.io/projected/513ac8e1-1d93-4493-aed9-0b23036efcf4-kube-api-access-7bc9b\") pod \"route-controller-manager-6989754cc4-jmfrx\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") " pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:57 crc kubenswrapper[4797]: I0216 11:11:57.821525 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:58 crc kubenswrapper[4797]: I0216 11:11:58.000784 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"] Feb 16 11:11:58 crc kubenswrapper[4797]: W0216 11:11:58.011256 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod513ac8e1_1d93_4493_aed9_0b23036efcf4.slice/crio-208f21a49dc516fc4f7abec64642b1338f18b316e0b881b08ed3fb5aabbada06 WatchSource:0}: Error finding container 208f21a49dc516fc4f7abec64642b1338f18b316e0b881b08ed3fb5aabbada06: Status 404 returned error can't find the container with id 208f21a49dc516fc4f7abec64642b1338f18b316e0b881b08ed3fb5aabbada06 Feb 16 11:11:58 crc kubenswrapper[4797]: I0216 11:11:58.300412 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" event={"ID":"513ac8e1-1d93-4493-aed9-0b23036efcf4","Type":"ContainerStarted","Data":"21775a6232fbf0072fe3edbef1f51261d6e6376a852bddf91f34d9b345451af1"} Feb 16 11:11:58 crc kubenswrapper[4797]: I0216 11:11:58.300849 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" event={"ID":"513ac8e1-1d93-4493-aed9-0b23036efcf4","Type":"ContainerStarted","Data":"208f21a49dc516fc4f7abec64642b1338f18b316e0b881b08ed3fb5aabbada06"} Feb 16 11:11:58 crc kubenswrapper[4797]: I0216 11:11:58.301021 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:11:58 crc kubenswrapper[4797]: I0216 11:11:58.329194 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" podStartSLOduration=5.32915978 podStartE2EDuration="5.32915978s" podCreationTimestamp="2026-02-16 11:11:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:11:58.326654441 +0000 UTC m=+313.046839501" watchObservedRunningTime="2026-02-16 11:11:58.32915978 +0000 UTC m=+313.049344800" Feb 16 11:11:58 crc kubenswrapper[4797]: I0216 11:11:58.533436 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.809325 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gnspr"] Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.810729 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.814593 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.826251 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnspr"] Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.891657 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8785aee1-a170-4747-bb85-ddd8653c51d2-catalog-content\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.891773 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtd4w\" (UniqueName: \"kubernetes.io/projected/8785aee1-a170-4747-bb85-ddd8653c51d2-kube-api-access-qtd4w\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.891848 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8785aee1-a170-4747-bb85-ddd8653c51d2-utilities\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.992421 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8785aee1-a170-4747-bb85-ddd8653c51d2-utilities\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.992487 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8785aee1-a170-4747-bb85-ddd8653c51d2-catalog-content\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.992689 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtd4w\" (UniqueName: \"kubernetes.io/projected/8785aee1-a170-4747-bb85-ddd8653c51d2-kube-api-access-qtd4w\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.993015 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8785aee1-a170-4747-bb85-ddd8653c51d2-utilities\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:04 crc kubenswrapper[4797]: I0216 11:12:04.993100 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8785aee1-a170-4747-bb85-ddd8653c51d2-catalog-content\") pod \"community-operators-gnspr\" (UID: 
\"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.002160 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pjpsv"] Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.003364 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.006533 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.012782 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtd4w\" (UniqueName: \"kubernetes.io/projected/8785aee1-a170-4747-bb85-ddd8653c51d2-kube-api-access-qtd4w\") pod \"community-operators-gnspr\" (UID: \"8785aee1-a170-4747-bb85-ddd8653c51d2\") " pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.013533 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjpsv"] Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.127468 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gnspr" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.195457 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-catalog-content\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.195696 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhfgx\" (UniqueName: \"kubernetes.io/projected/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-kube-api-access-lhfgx\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.195751 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-utilities\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.296436 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-catalog-content\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.296490 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhfgx\" (UniqueName: \"kubernetes.io/projected/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-kube-api-access-lhfgx\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.296523 4797 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-utilities\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.296921 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-utilities\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.296982 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-catalog-content\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.317215 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhfgx\" (UniqueName: \"kubernetes.io/projected/4698d1d1-b33e-4ede-bd45-ac6adf4d64a4-kube-api-access-lhfgx\") pod \"redhat-marketplace-pjpsv\" (UID: \"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4\") " pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.330723 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pjpsv" Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.524444 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gnspr"] Feb 16 11:12:05 crc kubenswrapper[4797]: I0216 11:12:05.779202 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pjpsv"] Feb 16 11:12:05 crc kubenswrapper[4797]: W0216 11:12:05.791909 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4698d1d1_b33e_4ede_bd45_ac6adf4d64a4.slice/crio-bb25dfb36336ba75ca72627b2cebccc4c2295f55aa314eedd9df960460eaabb2 WatchSource:0}: Error finding container bb25dfb36336ba75ca72627b2cebccc4c2295f55aa314eedd9df960460eaabb2: Status 404 returned error can't find the container with id bb25dfb36336ba75ca72627b2cebccc4c2295f55aa314eedd9df960460eaabb2 Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.346938 4797 generic.go:334] "Generic (PLEG): container finished" podID="8785aee1-a170-4747-bb85-ddd8653c51d2" containerID="d557d13c4ff308ba021ea76c67554a2adb733c4b2837bc2ff2387c002c04a6aa" exitCode=0 Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.347042 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnspr" event={"ID":"8785aee1-a170-4747-bb85-ddd8653c51d2","Type":"ContainerDied","Data":"d557d13c4ff308ba021ea76c67554a2adb733c4b2837bc2ff2387c002c04a6aa"} Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.347073 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnspr" event={"ID":"8785aee1-a170-4747-bb85-ddd8653c51d2","Type":"ContainerStarted","Data":"08fd89dca7cc28c559339a9093c994c2713e81140ba13d2daf10364352b9701b"} Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.350239 4797 generic.go:334] "Generic (PLEG): container finished" 
podID="4698d1d1-b33e-4ede-bd45-ac6adf4d64a4" containerID="d8b3f06230258db2d2cd3603e5e7d89a836c6c12c5052520e8abff7207a6a745" exitCode=0 Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.350271 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjpsv" event={"ID":"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4","Type":"ContainerDied","Data":"d8b3f06230258db2d2cd3603e5e7d89a836c6c12c5052520e8abff7207a6a745"} Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.350294 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjpsv" event={"ID":"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4","Type":"ContainerStarted","Data":"bb25dfb36336ba75ca72627b2cebccc4c2295f55aa314eedd9df960460eaabb2"} Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.409499 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwk2z"] Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.410846 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.413622 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.430651 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwk2z"] Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.512056 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db1d294-337b-4051-b922-c7c3270426f2-utilities\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.512293 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db1d294-337b-4051-b922-c7c3270426f2-catalog-content\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.512476 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jqg5\" (UniqueName: \"kubernetes.io/projected/0db1d294-337b-4051-b922-c7c3270426f2-kube-api-access-7jqg5\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.613789 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jqg5\" (UniqueName: \"kubernetes.io/projected/0db1d294-337b-4051-b922-c7c3270426f2-kube-api-access-7jqg5\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.613874 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db1d294-337b-4051-b922-c7c3270426f2-utilities\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 
11:12:06.613964 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db1d294-337b-4051-b922-c7c3270426f2-catalog-content\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.614926 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0db1d294-337b-4051-b922-c7c3270426f2-catalog-content\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.615296 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0db1d294-337b-4051-b922-c7c3270426f2-utilities\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.641987 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jqg5\" (UniqueName: \"kubernetes.io/projected/0db1d294-337b-4051-b922-c7c3270426f2-kube-api-access-7jqg5\") pod \"redhat-operators-pwk2z\" (UID: \"0db1d294-337b-4051-b922-c7c3270426f2\") " pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:06 crc kubenswrapper[4797]: I0216 11:12:06.738250 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwk2z" Feb 16 11:12:07 crc kubenswrapper[4797]: I0216 11:12:07.137290 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwk2z"] Feb 16 11:12:07 crc kubenswrapper[4797]: W0216 11:12:07.143547 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db1d294_337b_4051_b922_c7c3270426f2.slice/crio-559de2bfa1506a8caec87e2f5473b853f4ad7f460120f7f1e3df5ab11f76328b WatchSource:0}: Error finding container 559de2bfa1506a8caec87e2f5473b853f4ad7f460120f7f1e3df5ab11f76328b: Status 404 returned error can't find the container with id 559de2bfa1506a8caec87e2f5473b853f4ad7f460120f7f1e3df5ab11f76328b Feb 16 11:12:07 crc kubenswrapper[4797]: I0216 11:12:07.357069 4797 generic.go:334] "Generic (PLEG): container finished" podID="4698d1d1-b33e-4ede-bd45-ac6adf4d64a4" containerID="02717429844cc38d08bceb94d5e7f54e1a3b3ec8c49a21ccf97706667f687fdf" exitCode=0 Feb 16 11:12:07 crc kubenswrapper[4797]: I0216 11:12:07.357133 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjpsv" event={"ID":"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4","Type":"ContainerDied","Data":"02717429844cc38d08bceb94d5e7f54e1a3b3ec8c49a21ccf97706667f687fdf"} Feb 16 11:12:07 crc kubenswrapper[4797]: I0216 11:12:07.358784 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnspr" event={"ID":"8785aee1-a170-4747-bb85-ddd8653c51d2","Type":"ContainerStarted","Data":"89c473e5f400ad5330a86bf0c480ca60501236fa8500da385bbdf0b66d52f964"} Feb 16 11:12:07 crc kubenswrapper[4797]: I0216 11:12:07.360309 4797 generic.go:334] "Generic (PLEG): container finished" podID="0db1d294-337b-4051-b922-c7c3270426f2" containerID="3c049e0abebb444d6703ec3108e05a5a458ffd49319c7e5751025cd3e3ab7c7b" exitCode=0 Feb 16 
11:12:07 crc kubenswrapper[4797]: I0216 11:12:07.360351 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwk2z" event={"ID":"0db1d294-337b-4051-b922-c7c3270426f2","Type":"ContainerDied","Data":"3c049e0abebb444d6703ec3108e05a5a458ffd49319c7e5751025cd3e3ab7c7b"} Feb 16 11:12:07 crc kubenswrapper[4797]: I0216 11:12:07.360375 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwk2z" event={"ID":"0db1d294-337b-4051-b922-c7c3270426f2","Type":"ContainerStarted","Data":"559de2bfa1506a8caec87e2f5473b853f4ad7f460120f7f1e3df5ab11f76328b"} Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.219449 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8fw4k"] Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.221896 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.222350 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fw4k"] Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.224637 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.235244 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29a8611-590f-4899-aab6-1c60031e24ad-utilities\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.235630 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29a8611-590f-4899-aab6-1c60031e24ad-catalog-content\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.235836 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqcbq\" (UniqueName: \"kubernetes.io/projected/b29a8611-590f-4899-aab6-1c60031e24ad-kube-api-access-dqcbq\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.336533 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqcbq\" (UniqueName: \"kubernetes.io/projected/b29a8611-590f-4899-aab6-1c60031e24ad-kube-api-access-dqcbq\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.336846 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29a8611-590f-4899-aab6-1c60031e24ad-utilities\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.336997 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29a8611-590f-4899-aab6-1c60031e24ad-catalog-content\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.337461 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b29a8611-590f-4899-aab6-1c60031e24ad-utilities\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.337474 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b29a8611-590f-4899-aab6-1c60031e24ad-catalog-content\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.366819 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqcbq\" (UniqueName: \"kubernetes.io/projected/b29a8611-590f-4899-aab6-1c60031e24ad-kube-api-access-dqcbq\") pod \"certified-operators-8fw4k\" (UID: \"b29a8611-590f-4899-aab6-1c60031e24ad\") " pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.367055 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pjpsv" event={"ID":"4698d1d1-b33e-4ede-bd45-ac6adf4d64a4","Type":"ContainerStarted","Data":"58aa88db0dcdfb10106cc604188ea199cdb0adb6351e1e5551968dde64f7c997"} Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.368977 4797 generic.go:334] "Generic (PLEG): container finished" podID="8785aee1-a170-4747-bb85-ddd8653c51d2" containerID="89c473e5f400ad5330a86bf0c480ca60501236fa8500da385bbdf0b66d52f964" exitCode=0 Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.369020 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnspr" event={"ID":"8785aee1-a170-4747-bb85-ddd8653c51d2","Type":"ContainerDied","Data":"89c473e5f400ad5330a86bf0c480ca60501236fa8500da385bbdf0b66d52f964"} Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.372627 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwk2z" event={"ID":"0db1d294-337b-4051-b922-c7c3270426f2","Type":"ContainerStarted","Data":"21d49fe1b2c48641883a4039f5b161815fde15fc4a28f7dc9de1584cb38d0781"} Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.388254 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pjpsv" podStartSLOduration=2.953227613 podStartE2EDuration="4.388239201s" podCreationTimestamp="2026-02-16 11:12:04 +0000 UTC" firstStartedPulling="2026-02-16 11:12:06.353068928 +0000 UTC m=+321.073253908" lastFinishedPulling="2026-02-16 11:12:07.788080516 +0000 UTC m=+322.508265496" observedRunningTime="2026-02-16 11:12:08.38712773 +0000 UTC m=+323.107312710" watchObservedRunningTime="2026-02-16 11:12:08.388239201 +0000 UTC m=+323.108424181" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.545883 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fw4k" Feb 16 11:12:08 crc kubenswrapper[4797]: I0216 11:12:08.966867 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fw4k"] Feb 16 11:12:09 crc kubenswrapper[4797]: I0216 11:12:09.379410 4797 generic.go:334] "Generic (PLEG): container finished" podID="0db1d294-337b-4051-b922-c7c3270426f2" containerID="21d49fe1b2c48641883a4039f5b161815fde15fc4a28f7dc9de1584cb38d0781" exitCode=0 Feb 16 11:12:09 crc kubenswrapper[4797]: I0216 11:12:09.379454 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwk2z" event={"ID":"0db1d294-337b-4051-b922-c7c3270426f2","Type":"ContainerDied","Data":"21d49fe1b2c48641883a4039f5b161815fde15fc4a28f7dc9de1584cb38d0781"} Feb 16 11:12:09 crc kubenswrapper[4797]: I0216 11:12:09.382807 4797 generic.go:334] "Generic (PLEG): container finished" podID="b29a8611-590f-4899-aab6-1c60031e24ad" containerID="5a3f2b9d9581863d96167971e4dc7bf79f53f0ffec26776962979725c7adc095" exitCode=0 Feb 16 11:12:09 crc kubenswrapper[4797]: I0216 11:12:09.382864 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fw4k" event={"ID":"b29a8611-590f-4899-aab6-1c60031e24ad","Type":"ContainerDied","Data":"5a3f2b9d9581863d96167971e4dc7bf79f53f0ffec26776962979725c7adc095"} Feb 16 11:12:09 crc kubenswrapper[4797]: I0216 11:12:09.384233 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fw4k" event={"ID":"b29a8611-590f-4899-aab6-1c60031e24ad","Type":"ContainerStarted","Data":"ac19d6278c923fbbb391e3e6f03ec585112eebce7cd353dea2a93b68077e5e92"} Feb 16 11:12:09 crc kubenswrapper[4797]: I0216 11:12:09.391179 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gnspr" event={"ID":"8785aee1-a170-4747-bb85-ddd8653c51d2","Type":"ContainerStarted","Data":"ba9e2a80b8cd915ee675b5bfcc195b66dc3cf294cc03fbe690ba4bd7cf0dfd2b"} Feb 16 11:12:09 crc kubenswrapper[4797]: I0216 11:12:09.418038 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gnspr" podStartSLOduration=2.981620859 podStartE2EDuration="5.418024445s" podCreationTimestamp="2026-02-16 11:12:04 +0000 UTC" firstStartedPulling="2026-02-16 11:12:06.351480454 +0000 UTC m=+321.071665444" lastFinishedPulling="2026-02-16 11:12:08.78788405 +0000 UTC m=+323.508069030" observedRunningTime="2026-02-16 11:12:09.416481743 +0000 UTC m=+324.136666723" watchObservedRunningTime="2026-02-16 11:12:09.418024445 +0000 UTC m=+324.138209415" Feb 16 11:12:11 crc kubenswrapper[4797]: I0216 11:12:11.404050 4797 generic.go:334] "Generic (PLEG): container finished" podID="b29a8611-590f-4899-aab6-1c60031e24ad" containerID="1062fbd54cdda5f88edf9cfc4103d4e715f83305894aa16921f4a88c6fd518e1" exitCode=0 Feb 16 11:12:11 crc kubenswrapper[4797]: I0216 11:12:11.404093 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fw4k" event={"ID":"b29a8611-590f-4899-aab6-1c60031e24ad","Type":"ContainerDied","Data":"1062fbd54cdda5f88edf9cfc4103d4e715f83305894aa16921f4a88c6fd518e1"} Feb 16 11:12:11 crc kubenswrapper[4797]: I0216 11:12:11.408190 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwk2z" 
event={"ID":"0db1d294-337b-4051-b922-c7c3270426f2","Type":"ContainerStarted","Data":"63d07223fa995bdf666ec4320e55260e6edf9cdd08173978857b9ccc70248a14"}
Feb 16 11:12:11 crc kubenswrapper[4797]: I0216 11:12:11.470725 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwk2z" podStartSLOduration=2.488608241 podStartE2EDuration="5.470710683s" podCreationTimestamp="2026-02-16 11:12:06 +0000 UTC" firstStartedPulling="2026-02-16 11:12:07.362742735 +0000 UTC m=+322.082927715" lastFinishedPulling="2026-02-16 11:12:10.344845167 +0000 UTC m=+325.065030157" observedRunningTime="2026-02-16 11:12:11.468612935 +0000 UTC m=+326.188797925" watchObservedRunningTime="2026-02-16 11:12:11.470710683 +0000 UTC m=+326.190895663"
Feb 16 11:12:11 crc kubenswrapper[4797]: I0216 11:12:11.703649 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:12:11 crc kubenswrapper[4797]: I0216 11:12:11.703742 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:12:12 crc kubenswrapper[4797]: I0216 11:12:12.412318 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57dc77797b-lfndj"]
Feb 16 11:12:12 crc kubenswrapper[4797]: I0216 11:12:12.412849 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" podUID="ae6830e7-f012-4169-9805-ea02746d6d54" containerName="controller-manager" containerID="cri-o://b8ea44f1c85adcc5873915c2bb8b5b981dce6eb1847a558fbbb973b6cd8bded4" gracePeriod=30
Feb 16 11:12:12 crc kubenswrapper[4797]: I0216 11:12:12.416883 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"]
Feb 16 11:12:12 crc kubenswrapper[4797]: I0216 11:12:12.417106 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" podUID="513ac8e1-1d93-4493-aed9-0b23036efcf4" containerName="route-controller-manager" containerID="cri-o://21775a6232fbf0072fe3edbef1f51261d6e6376a852bddf91f34d9b345451af1" gracePeriod=30
Feb 16 11:12:12 crc kubenswrapper[4797]: I0216 11:12:12.421863 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fw4k" event={"ID":"b29a8611-590f-4899-aab6-1c60031e24ad","Type":"ContainerStarted","Data":"dfa3113fc863c3db1c6d4299b007b753803e45df07f57184805447e5731c7fde"}
Feb 16 11:12:12 crc kubenswrapper[4797]: I0216 11:12:12.445160 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8fw4k" podStartSLOduration=1.7879089000000001 podStartE2EDuration="4.445142434s" podCreationTimestamp="2026-02-16 11:12:08 +0000 UTC" firstStartedPulling="2026-02-16 11:12:09.384061134 +0000 UTC m=+324.104246104" lastFinishedPulling="2026-02-16 11:12:12.041294658 +0000 UTC m=+326.761479638" observedRunningTime="2026-02-16 11:12:12.443355004 +0000 UTC m=+327.163540004" watchObservedRunningTime="2026-02-16 11:12:12.445142434 +0000 UTC m=+327.165327424"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.427835 4797 generic.go:334] "Generic (PLEG): container finished" podID="ae6830e7-f012-4169-9805-ea02746d6d54" containerID="b8ea44f1c85adcc5873915c2bb8b5b981dce6eb1847a558fbbb973b6cd8bded4" exitCode=0
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.427942 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" event={"ID":"ae6830e7-f012-4169-9805-ea02746d6d54","Type":"ContainerDied","Data":"b8ea44f1c85adcc5873915c2bb8b5b981dce6eb1847a558fbbb973b6cd8bded4"}
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.429388 4797 generic.go:334] "Generic (PLEG): container finished" podID="513ac8e1-1d93-4493-aed9-0b23036efcf4" containerID="21775a6232fbf0072fe3edbef1f51261d6e6376a852bddf91f34d9b345451af1" exitCode=0
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.429472 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" event={"ID":"513ac8e1-1d93-4493-aed9-0b23036efcf4","Type":"ContainerDied","Data":"21775a6232fbf0072fe3edbef1f51261d6e6376a852bddf91f34d9b345451af1"}
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.453465 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.500790 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"]
Feb 16 11:12:13 crc kubenswrapper[4797]: E0216 11:12:13.501073 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="513ac8e1-1d93-4493-aed9-0b23036efcf4" containerName="route-controller-manager"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.501096 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="513ac8e1-1d93-4493-aed9-0b23036efcf4" containerName="route-controller-manager"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.501242 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="513ac8e1-1d93-4493-aed9-0b23036efcf4" containerName="route-controller-manager"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.501713 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.508292 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"]
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.595695 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606097 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bc9b\" (UniqueName: \"kubernetes.io/projected/513ac8e1-1d93-4493-aed9-0b23036efcf4-kube-api-access-7bc9b\") pod \"513ac8e1-1d93-4493-aed9-0b23036efcf4\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606173 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ac8e1-1d93-4493-aed9-0b23036efcf4-serving-cert\") pod \"513ac8e1-1d93-4493-aed9-0b23036efcf4\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606216 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-config\") pod \"513ac8e1-1d93-4493-aed9-0b23036efcf4\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606256 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-client-ca\") pod \"513ac8e1-1d93-4493-aed9-0b23036efcf4\" (UID: \"513ac8e1-1d93-4493-aed9-0b23036efcf4\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606637 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6830e7-f012-4169-9805-ea02746d6d54-serving-cert\") pod \"ae6830e7-f012-4169-9805-ea02746d6d54\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606804 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08a7a271-a248-4764-91b5-38f842bf6579-config\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606882 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5k2\" (UniqueName: \"kubernetes.io/projected/08a7a271-a248-4764-91b5-38f842bf6579-kube-api-access-qd5k2\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606946 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08a7a271-a248-4764-91b5-38f842bf6579-serving-cert\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.606975 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08a7a271-a248-4764-91b5-38f842bf6579-client-ca\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.607091 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-client-ca" (OuterVolumeSpecName: "client-ca") pod "513ac8e1-1d93-4493-aed9-0b23036efcf4" (UID: "513ac8e1-1d93-4493-aed9-0b23036efcf4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.607172 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.607192 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-config" (OuterVolumeSpecName: "config") pod "513ac8e1-1d93-4493-aed9-0b23036efcf4" (UID: "513ac8e1-1d93-4493-aed9-0b23036efcf4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.612614 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/513ac8e1-1d93-4493-aed9-0b23036efcf4-kube-api-access-7bc9b" (OuterVolumeSpecName: "kube-api-access-7bc9b") pod "513ac8e1-1d93-4493-aed9-0b23036efcf4" (UID: "513ac8e1-1d93-4493-aed9-0b23036efcf4"). InnerVolumeSpecName "kube-api-access-7bc9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.612958 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae6830e7-f012-4169-9805-ea02746d6d54-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ae6830e7-f012-4169-9805-ea02746d6d54" (UID: "ae6830e7-f012-4169-9805-ea02746d6d54"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.666319 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/513ac8e1-1d93-4493-aed9-0b23036efcf4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "513ac8e1-1d93-4493-aed9-0b23036efcf4" (UID: "513ac8e1-1d93-4493-aed9-0b23036efcf4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707423 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-client-ca\") pod \"ae6830e7-f012-4169-9805-ea02746d6d54\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707467 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-proxy-ca-bundles\") pod \"ae6830e7-f012-4169-9805-ea02746d6d54\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707508 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhdkk\" (UniqueName: \"kubernetes.io/projected/ae6830e7-f012-4169-9805-ea02746d6d54-kube-api-access-jhdkk\") pod \"ae6830e7-f012-4169-9805-ea02746d6d54\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707562 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-config\") pod \"ae6830e7-f012-4169-9805-ea02746d6d54\" (UID: \"ae6830e7-f012-4169-9805-ea02746d6d54\") "
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707688 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5k2\" (UniqueName: \"kubernetes.io/projected/08a7a271-a248-4764-91b5-38f842bf6579-kube-api-access-qd5k2\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707727 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08a7a271-a248-4764-91b5-38f842bf6579-serving-cert\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707754 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08a7a271-a248-4764-91b5-38f842bf6579-client-ca\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707814 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08a7a271-a248-4764-91b5-38f842bf6579-config\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707889 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae6830e7-f012-4169-9805-ea02746d6d54-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707904 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bc9b\" (UniqueName: \"kubernetes.io/projected/513ac8e1-1d93-4493-aed9-0b23036efcf4-kube-api-access-7bc9b\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707916 4797 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/513ac8e1-1d93-4493-aed9-0b23036efcf4-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.707928 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/513ac8e1-1d93-4493-aed9-0b23036efcf4-config\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.708678 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-config" (OuterVolumeSpecName: "config") pod "ae6830e7-f012-4169-9805-ea02746d6d54" (UID: "ae6830e7-f012-4169-9805-ea02746d6d54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.708965 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-client-ca" (OuterVolumeSpecName: "client-ca") pod "ae6830e7-f012-4169-9805-ea02746d6d54" (UID: "ae6830e7-f012-4169-9805-ea02746d6d54"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.708971 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08a7a271-a248-4764-91b5-38f842bf6579-config\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.709529 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ae6830e7-f012-4169-9805-ea02746d6d54" (UID: "ae6830e7-f012-4169-9805-ea02746d6d54"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.710246 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08a7a271-a248-4764-91b5-38f842bf6579-client-ca\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.712152 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08a7a271-a248-4764-91b5-38f842bf6579-serving-cert\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.712268 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae6830e7-f012-4169-9805-ea02746d6d54-kube-api-access-jhdkk" (OuterVolumeSpecName: "kube-api-access-jhdkk") pod "ae6830e7-f012-4169-9805-ea02746d6d54" (UID: "ae6830e7-f012-4169-9805-ea02746d6d54"). InnerVolumeSpecName "kube-api-access-jhdkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.727493 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5k2\" (UniqueName: \"kubernetes.io/projected/08a7a271-a248-4764-91b5-38f842bf6579-kube-api-access-qd5k2\") pod \"route-controller-manager-7c8f45c5cb-jb5c2\" (UID: \"08a7a271-a248-4764-91b5-38f842bf6579\") " pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.808741 4797 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.808787 4797 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.808801 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhdkk\" (UniqueName: \"kubernetes.io/projected/ae6830e7-f012-4169-9805-ea02746d6d54-kube-api-access-jhdkk\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.808814 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae6830e7-f012-4169-9805-ea02746d6d54-config\") on node \"crc\" DevicePath \"\""
Feb 16 11:12:13 crc kubenswrapper[4797]: I0216 11:12:13.817260 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.023429 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"]
Feb 16 11:12:14 crc kubenswrapper[4797]: W0216 11:12:14.025948 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a7a271_a248_4764_91b5_38f842bf6579.slice/crio-f06c33278b36ec95d5b0b735ba2d5cc90c5489b6f68fd3bce51c4ad1d9bacaa0 WatchSource:0}: Error finding container f06c33278b36ec95d5b0b735ba2d5cc90c5489b6f68fd3bce51c4ad1d9bacaa0: Status 404 returned error can't find the container with id f06c33278b36ec95d5b0b735ba2d5cc90c5489b6f68fd3bce51c4ad1d9bacaa0
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.436441 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.436460 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx" event={"ID":"513ac8e1-1d93-4493-aed9-0b23036efcf4","Type":"ContainerDied","Data":"208f21a49dc516fc4f7abec64642b1338f18b316e0b881b08ed3fb5aabbada06"}
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.436835 4797 scope.go:117] "RemoveContainer" containerID="21775a6232fbf0072fe3edbef1f51261d6e6376a852bddf91f34d9b345451af1"
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.438368 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2" event={"ID":"08a7a271-a248-4764-91b5-38f842bf6579","Type":"ContainerStarted","Data":"1ce367f9ad1d245e962a6daebf2f25897c7c7346c9b3d4447713a9b6b0eca782"}
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.438426 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2" event={"ID":"08a7a271-a248-4764-91b5-38f842bf6579","Type":"ContainerStarted","Data":"f06c33278b36ec95d5b0b735ba2d5cc90c5489b6f68fd3bce51c4ad1d9bacaa0"}
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.438680 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.441922 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj" event={"ID":"ae6830e7-f012-4169-9805-ea02746d6d54","Type":"ContainerDied","Data":"e436265d063ff1837cc779bab5f57a52efbe77c6a2ca77dc4b85d587861ecfa4"}
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.441958 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57dc77797b-lfndj"
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.456300 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"]
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.462022 4797 scope.go:117] "RemoveContainer" containerID="b8ea44f1c85adcc5873915c2bb8b5b981dce6eb1847a558fbbb973b6cd8bded4"
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.464890 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6989754cc4-jmfrx"]
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.478857 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2" podStartSLOduration=2.478837365 podStartE2EDuration="2.478837365s" podCreationTimestamp="2026-02-16 11:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:12:14.476264613 +0000 UTC m=+329.196449633" watchObservedRunningTime="2026-02-16 11:12:14.478837365 +0000 UTC m=+329.199022345"
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.498266 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57dc77797b-lfndj"]
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.502990 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57dc77797b-lfndj"]
Feb 16 11:12:14 crc kubenswrapper[4797]: I0216 11:12:14.554504 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c8f45c5cb-jb5c2"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.127975 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gnspr"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.128018 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gnspr"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.176037 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gnspr"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.331633 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pjpsv"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.331694 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pjpsv"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.373787 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pjpsv"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.489344 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pjpsv"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.491782 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gnspr"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.517383 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77b64756d8-w262z"]
Feb 16 11:12:15 crc kubenswrapper[4797]: E0216 11:12:15.517667 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae6830e7-f012-4169-9805-ea02746d6d54" containerName="controller-manager"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.517688 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae6830e7-f012-4169-9805-ea02746d6d54" containerName="controller-manager"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.517808 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae6830e7-f012-4169-9805-ea02746d6d54" containerName="controller-manager"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.518273 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.527503 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.527561 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-client-ca\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.527651 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-proxy-ca-bundles\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.527513 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.527672 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-config\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.527741 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13096206-bf25-4558-8843-c344c60b5dec-serving-cert\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.528041 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.528240 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krvc\" (UniqueName: \"kubernetes.io/projected/13096206-bf25-4558-8843-c344c60b5dec-kube-api-access-5krvc\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.528361 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.528703 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.528993 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.532370 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.537721 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b64756d8-w262z"]
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.629310 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-client-ca\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.629361 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-proxy-ca-bundles\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.629391 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-config\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.629451 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13096206-bf25-4558-8843-c344c60b5dec-serving-cert\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.629507 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5krvc\" (UniqueName: \"kubernetes.io/projected/13096206-bf25-4558-8843-c344c60b5dec-kube-api-access-5krvc\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.630686 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-proxy-ca-bundles\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.630964 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-config\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.631789 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13096206-bf25-4558-8843-c344c60b5dec-client-ca\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.640597 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13096206-bf25-4558-8843-c344c60b5dec-serving-cert\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.649187 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krvc\" (UniqueName: \"kubernetes.io/projected/13096206-bf25-4558-8843-c344c60b5dec-kube-api-access-5krvc\") pod \"controller-manager-77b64756d8-w262z\" (UID: \"13096206-bf25-4558-8843-c344c60b5dec\") " pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:15 crc kubenswrapper[4797]: I0216 11:12:15.844962 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.003965 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="513ac8e1-1d93-4493-aed9-0b23036efcf4" path="/var/lib/kubelet/pods/513ac8e1-1d93-4493-aed9-0b23036efcf4/volumes"
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.004620 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae6830e7-f012-4169-9805-ea02746d6d54" path="/var/lib/kubelet/pods/ae6830e7-f012-4169-9805-ea02746d6d54/volumes"
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.145031 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b64756d8-w262z"]
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.455682 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b64756d8-w262z" event={"ID":"13096206-bf25-4558-8843-c344c60b5dec","Type":"ContainerStarted","Data":"6faf461ff2788cee75677a619e3cbe8245189e860a0770df4fe856be7afe34fb"}
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.456021 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b64756d8-w262z" event={"ID":"13096206-bf25-4558-8843-c344c60b5dec","Type":"ContainerStarted","Data":"c510b60e9c07bd15cac862d4b4219a22440a62d1bda57c91a9857cc80cb1b921"}
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.470296 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77b64756d8-w262z" podStartSLOduration=4.470257679 podStartE2EDuration="4.470257679s" podCreationTimestamp="2026-02-16 11:12:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:12:16.469030139 +0000 UTC m=+331.189215129" watchObservedRunningTime="2026-02-16 11:12:16.470257679 +0000 UTC m=+331.190442659"
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.739391 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwk2z"
Feb 16 11:12:16 crc kubenswrapper[4797]: I0216 11:12:16.739430 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwk2z"
Feb 16 11:12:17 crc kubenswrapper[4797]: I0216 11:12:17.459784 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:17 crc kubenswrapper[4797]: I0216 11:12:17.466199 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77b64756d8-w262z"
Feb 16 11:12:17 crc kubenswrapper[4797]: I0216 11:12:17.779391 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pwk2z" podUID="0db1d294-337b-4051-b922-c7c3270426f2" containerName="registry-server" probeResult="failure" output=<
Feb 16 11:12:17 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s
Feb 16 11:12:17 crc kubenswrapper[4797]: >
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.546342 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8fw4k"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.546410 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8fw4k"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.596658 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8fw4k"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.864742 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m4vnw"]
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.865908 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.881752 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-bound-sa-token\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.881825 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.882001 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-registry-tls\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.882082 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md5zs\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-kube-api-access-md5zs\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.882126 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-trusted-ca\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.882209 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-registry-certificates\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.882297 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.882344 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.884265 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m4vnw"]
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.918880 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983210 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-bound-sa-token\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983269 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983306 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-registry-tls\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983331 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md5zs\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-kube-api-access-md5zs\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983352 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-trusted-ca\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983374 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-registry-certificates\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983398 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.983902 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-ca-trust-extracted\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.984447 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-trusted-ca\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.984686 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-registry-certificates\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.992237 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-installation-pull-secrets\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:18 crc kubenswrapper[4797]: I0216 11:12:18.992261 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-registry-tls\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:19 crc kubenswrapper[4797]: I0216 11:12:19.016641 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md5zs\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-kube-api-access-md5zs\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:19 crc kubenswrapper[4797]: I0216 11:12:19.016683 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e5430968-6f59-4eb3-a8ce-1aaa26a7eabc-bound-sa-token\") pod \"image-registry-66df7c8f76-m4vnw\" (UID: \"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc\") " pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:19 crc kubenswrapper[4797]: I0216 11:12:19.188917 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:19 crc kubenswrapper[4797]: I0216 11:12:19.517702 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8fw4k"
Feb 16 11:12:19 crc kubenswrapper[4797]: I0216 11:12:19.576916 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-m4vnw"]
Feb 16 11:12:19 crc kubenswrapper[4797]: W0216 11:12:19.582834 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode5430968_6f59_4eb3_a8ce_1aaa26a7eabc.slice/crio-873afcac05e291e5901727b1308fea72f73f55ad8726fbf01545accb27a920e5 WatchSource:0}: Error finding container 873afcac05e291e5901727b1308fea72f73f55ad8726fbf01545accb27a920e5: Status 404 returned error can't find the container with id 873afcac05e291e5901727b1308fea72f73f55ad8726fbf01545accb27a920e5
Feb 16 11:12:20 crc kubenswrapper[4797]: I0216 11:12:20.484561 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw" event={"ID":"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc","Type":"ContainerStarted","Data":"5127ce2ab8a33036135d33ff83450f379056935fc2965e88abf8dc5cb7fabfd1"}
Feb 16 11:12:20 crc kubenswrapper[4797]: I0216 11:12:20.484988 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:20 crc kubenswrapper[4797]: I0216 11:12:20.485015 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw" event={"ID":"e5430968-6f59-4eb3-a8ce-1aaa26a7eabc","Type":"ContainerStarted","Data":"873afcac05e291e5901727b1308fea72f73f55ad8726fbf01545accb27a920e5"}
Feb 16 11:12:20 crc kubenswrapper[4797]: I0216 11:12:20.511078 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw" podStartSLOduration=2.511057289 podStartE2EDuration="2.511057289s" podCreationTimestamp="2026-02-16 11:12:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:12:20.511051799 +0000 UTC m=+335.231236819" watchObservedRunningTime="2026-02-16 11:12:20.511057289 +0000 UTC m=+335.231242279"
Feb 16 11:12:26 crc kubenswrapper[4797]: I0216 11:12:26.777100 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwk2z"
Feb 16 11:12:26 crc kubenswrapper[4797]: I0216 11:12:26.816760 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwk2z"
Feb 16 11:12:39 crc kubenswrapper[4797]: I0216 11:12:39.193948 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-m4vnw"
Feb 16 11:12:39 crc kubenswrapper[4797]: I0216 11:12:39.281180 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ckmh7"]
Feb 16 11:12:41 crc kubenswrapper[4797]: I0216 11:12:41.703775 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:12:41 crc kubenswrapper[4797]: I0216 11:12:41.704377 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.322286 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" podUID="d97ef757-b33f-4c9d-9a9b-758cf73ce40e" containerName="registry" containerID="cri-o://397f5299cdf6f15525015ec7a79f804c1afd854e4e995b11b45e29575ea463dc" gracePeriod=30
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.753827 4797 generic.go:334] "Generic (PLEG): container finished" podID="d97ef757-b33f-4c9d-9a9b-758cf73ce40e" containerID="397f5299cdf6f15525015ec7a79f804c1afd854e4e995b11b45e29575ea463dc" exitCode=0
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.753935 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" event={"ID":"d97ef757-b33f-4c9d-9a9b-758cf73ce40e","Type":"ContainerDied","Data":"397f5299cdf6f15525015ec7a79f804c1afd854e4e995b11b45e29575ea463dc"}
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.754162 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7" event={"ID":"d97ef757-b33f-4c9d-9a9b-758cf73ce40e","Type":"ContainerDied","Data":"936f718bf104533277eba9fedb4d585b6baf3d4b75c71a35189dde329390f2fa"}
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.754181 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="936f718bf104533277eba9fedb4d585b6baf3d4b75c71a35189dde329390f2fa"
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.764004 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.928944 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-tls\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.929066 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7kwz\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-kube-api-access-c7kwz\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.929175 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-installation-pull-secrets\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.929262 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-trusted-ca\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.929537 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.929606 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-bound-sa-token\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.929649 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-ca-trust-extracted\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.929721 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-certificates\") pod \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\" (UID: \"d97ef757-b33f-4c9d-9a9b-758cf73ce40e\") "
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.932729 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.936039 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.939714 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.940845 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-kube-api-access-c7kwz" (OuterVolumeSpecName: "kube-api-access-c7kwz") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "kube-api-access-c7kwz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.941291 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.941361 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.951889 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 11:13:04 crc kubenswrapper[4797]: I0216 11:13:04.958514 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d97ef757-b33f-4c9d-9a9b-758cf73ce40e" (UID: "d97ef757-b33f-4c9d-9a9b-758cf73ce40e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.031669 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.031707 4797 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.031721 4797 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.031734 4797 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.031749 4797 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.031761 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7kwz\" (UniqueName: \"kubernetes.io/projected/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-kube-api-access-c7kwz\") on node \"crc\" DevicePath \"\""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.031772 4797 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d97ef757-b33f-4c9d-9a9b-758cf73ce40e-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.759658 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ckmh7"
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.795673 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ckmh7"]
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.797394 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ckmh7"]
Feb 16 11:13:05 crc kubenswrapper[4797]: I0216 11:13:05.997756 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d97ef757-b33f-4c9d-9a9b-758cf73ce40e" path="/var/lib/kubelet/pods/d97ef757-b33f-4c9d-9a9b-758cf73ce40e/volumes"
Feb 16 11:13:11 crc kubenswrapper[4797]: I0216 11:13:11.704083 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:13:11 crc kubenswrapper[4797]: I0216 11:13:11.704654 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:13:11 crc kubenswrapper[4797]: I0216 11:13:11.704702 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl"
Feb 16 11:13:11 crc kubenswrapper[4797]: I0216 11:13:11.705734 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af84a89245e5aaf7fe1b2839496582f7da8d713bc2c59a78f68c5a3db5e3f13c"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 11:13:11 crc kubenswrapper[4797]: I0216 11:13:11.705802 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://af84a89245e5aaf7fe1b2839496582f7da8d713bc2c59a78f68c5a3db5e3f13c" gracePeriod=600
Feb 16 11:13:12 crc kubenswrapper[4797]: I0216 11:13:12.800280 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="af84a89245e5aaf7fe1b2839496582f7da8d713bc2c59a78f68c5a3db5e3f13c" exitCode=0
Feb 16 11:13:12 crc kubenswrapper[4797]: I0216 11:13:12.801334 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"af84a89245e5aaf7fe1b2839496582f7da8d713bc2c59a78f68c5a3db5e3f13c"}
Feb 16 11:13:12 crc kubenswrapper[4797]: I0216 11:13:12.801446 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"21e93e8275d66acf9ae0c0ae90f1582953ed0ef2b28e1d32c012b6372bb207c7"}
Feb 16 11:13:12 crc kubenswrapper[4797]: I0216 11:13:12.801482 4797 scope.go:117] "RemoveContainer" containerID="ed83cc5f2184b8151b03a59f26051458d51e01c9279033682d6f1bcab7e0cef5"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.178317 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"]
Feb 16 11:15:00 crc kubenswrapper[4797]: E0216 11:15:00.179254 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d97ef757-b33f-4c9d-9a9b-758cf73ce40e" containerName="registry"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.179272 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d97ef757-b33f-4c9d-9a9b-758cf73ce40e" containerName="registry"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.179417 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d97ef757-b33f-4c9d-9a9b-758cf73ce40e" containerName="registry"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.179891 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.183681 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.183717 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.191747 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"]
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.321201 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75fdce59-c937-4565-b49a-1668d2504c37-secret-volume\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.321274 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pqrk\" (UniqueName: \"kubernetes.io/projected/75fdce59-c937-4565-b49a-1668d2504c37-kube-api-access-9pqrk\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.321297 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75fdce59-c937-4565-b49a-1668d2504c37-config-volume\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.422808 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pqrk\" (UniqueName: \"kubernetes.io/projected/75fdce59-c937-4565-b49a-1668d2504c37-kube-api-access-9pqrk\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"
Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.422865 4797 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75fdce59-c937-4565-b49a-1668d2504c37-config-volume\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.422930 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75fdce59-c937-4565-b49a-1668d2504c37-secret-volume\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.424244 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75fdce59-c937-4565-b49a-1668d2504c37-config-volume\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.428943 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75fdce59-c937-4565-b49a-1668d2504c37-secret-volume\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.437847 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pqrk\" (UniqueName: \"kubernetes.io/projected/75fdce59-c937-4565-b49a-1668d2504c37-kube-api-access-9pqrk\") pod \"collect-profiles-29520675-wfk5k\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.504003 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:00 crc kubenswrapper[4797]: I0216 11:15:00.698877 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"] Feb 16 11:15:01 crc kubenswrapper[4797]: I0216 11:15:01.593553 4797 generic.go:334] "Generic (PLEG): container finished" podID="75fdce59-c937-4565-b49a-1668d2504c37" containerID="048fefc750681fb0c59c2b58f45632528e46964cc9146e8ae6cf000c3a699230" exitCode=0 Feb 16 11:15:01 crc kubenswrapper[4797]: I0216 11:15:01.593656 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" event={"ID":"75fdce59-c937-4565-b49a-1668d2504c37","Type":"ContainerDied","Data":"048fefc750681fb0c59c2b58f45632528e46964cc9146e8ae6cf000c3a699230"} Feb 16 11:15:01 crc kubenswrapper[4797]: I0216 11:15:01.593917 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" event={"ID":"75fdce59-c937-4565-b49a-1668d2504c37","Type":"ContainerStarted","Data":"ceaa497e9fd73335f7aa3b4a0d8d957ef125510f4b26f3a00c646edb2d21656e"} Feb 16 11:15:02 crc kubenswrapper[4797]: I0216 11:15:02.818354 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:02 crc kubenswrapper[4797]: I0216 11:15:02.954636 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75fdce59-c937-4565-b49a-1668d2504c37-secret-volume\") pod \"75fdce59-c937-4565-b49a-1668d2504c37\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " Feb 16 11:15:02 crc kubenswrapper[4797]: I0216 11:15:02.954683 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pqrk\" (UniqueName: \"kubernetes.io/projected/75fdce59-c937-4565-b49a-1668d2504c37-kube-api-access-9pqrk\") pod \"75fdce59-c937-4565-b49a-1668d2504c37\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " Feb 16 11:15:02 crc kubenswrapper[4797]: I0216 11:15:02.954748 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75fdce59-c937-4565-b49a-1668d2504c37-config-volume\") pod \"75fdce59-c937-4565-b49a-1668d2504c37\" (UID: \"75fdce59-c937-4565-b49a-1668d2504c37\") " Feb 16 11:15:02 crc kubenswrapper[4797]: I0216 11:15:02.955435 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75fdce59-c937-4565-b49a-1668d2504c37-config-volume" (OuterVolumeSpecName: "config-volume") pod "75fdce59-c937-4565-b49a-1668d2504c37" (UID: "75fdce59-c937-4565-b49a-1668d2504c37"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:15:02 crc kubenswrapper[4797]: I0216 11:15:02.960240 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75fdce59-c937-4565-b49a-1668d2504c37-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75fdce59-c937-4565-b49a-1668d2504c37" (UID: "75fdce59-c937-4565-b49a-1668d2504c37"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:15:02 crc kubenswrapper[4797]: I0216 11:15:02.960538 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75fdce59-c937-4565-b49a-1668d2504c37-kube-api-access-9pqrk" (OuterVolumeSpecName: "kube-api-access-9pqrk") pod "75fdce59-c937-4565-b49a-1668d2504c37" (UID: "75fdce59-c937-4565-b49a-1668d2504c37"). InnerVolumeSpecName "kube-api-access-9pqrk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:15:03 crc kubenswrapper[4797]: I0216 11:15:03.056436 4797 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75fdce59-c937-4565-b49a-1668d2504c37-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:15:03 crc kubenswrapper[4797]: I0216 11:15:03.056489 4797 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75fdce59-c937-4565-b49a-1668d2504c37-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:15:03 crc kubenswrapper[4797]: I0216 11:15:03.056499 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pqrk\" (UniqueName: \"kubernetes.io/projected/75fdce59-c937-4565-b49a-1668d2504c37-kube-api-access-9pqrk\") on node \"crc\" DevicePath \"\"" Feb 16 11:15:03 crc kubenswrapper[4797]: I0216 11:15:03.606148 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" event={"ID":"75fdce59-c937-4565-b49a-1668d2504c37","Type":"ContainerDied","Data":"ceaa497e9fd73335f7aa3b4a0d8d957ef125510f4b26f3a00c646edb2d21656e"} Feb 16 11:15:03 crc kubenswrapper[4797]: I0216 11:15:03.606409 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceaa497e9fd73335f7aa3b4a0d8d957ef125510f4b26f3a00c646edb2d21656e" Feb 16 11:15:03 crc kubenswrapper[4797]: I0216 11:15:03.606202 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k" Feb 16 11:15:41 crc kubenswrapper[4797]: I0216 11:15:41.703384 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:15:41 crc kubenswrapper[4797]: I0216 11:15:41.704075 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:15:46 crc kubenswrapper[4797]: I0216 11:15:46.211656 4797 scope.go:117] "RemoveContainer" containerID="397f5299cdf6f15525015ec7a79f804c1afd854e4e995b11b45e29575ea463dc" Feb 16 11:16:11 crc kubenswrapper[4797]: I0216 11:16:11.703146 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:16:11 crc kubenswrapper[4797]: I0216 11:16:11.703911 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:16:41 crc kubenswrapper[4797]: I0216 11:16:41.703536 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:16:41 crc kubenswrapper[4797]: I0216 11:16:41.705194 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:16:41 crc kubenswrapper[4797]: I0216 11:16:41.705330 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:16:41 crc kubenswrapper[4797]: I0216 11:16:41.706263 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21e93e8275d66acf9ae0c0ae90f1582953ed0ef2b28e1d32c012b6372bb207c7"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:16:41 crc kubenswrapper[4797]: I0216 11:16:41.706418 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://21e93e8275d66acf9ae0c0ae90f1582953ed0ef2b28e1d32c012b6372bb207c7" gracePeriod=600 Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.169415 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="21e93e8275d66acf9ae0c0ae90f1582953ed0ef2b28e1d32c012b6372bb207c7" exitCode=0 Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.169625 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"21e93e8275d66acf9ae0c0ae90f1582953ed0ef2b28e1d32c012b6372bb207c7"} Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.169787 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"e7af1c89447ff7ab76e09ca5508cebe1098d580ac409a9bf112a6d6541596109"} Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.169812 4797 scope.go:117] "RemoveContainer" containerID="af84a89245e5aaf7fe1b2839496582f7da8d713bc2c59a78f68c5a3db5e3f13c" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.460204 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn"] Feb 16 11:16:42 crc kubenswrapper[4797]: E0216 11:16:42.460606 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75fdce59-c937-4565-b49a-1668d2504c37" containerName="collect-profiles" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.460679 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="75fdce59-c937-4565-b49a-1668d2504c37" containerName="collect-profiles" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.460844 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="75fdce59-c937-4565-b49a-1668d2504c37" containerName="collect-profiles" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.461555 4797 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.463769 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.476622 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn"] Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.507510 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cglmd\" (UniqueName: \"kubernetes.io/projected/2375197b-bcee-4713-841d-26bf583e7502-kube-api-access-cglmd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.507636 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.507679 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.609270 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cglmd\" (UniqueName: \"kubernetes.io/projected/2375197b-bcee-4713-841d-26bf583e7502-kube-api-access-cglmd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.609329 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.609371 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.609848 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.609877 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.628475 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cglmd\" (UniqueName: \"kubernetes.io/projected/2375197b-bcee-4713-841d-26bf583e7502-kube-api-access-cglmd\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.780856 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:42 crc kubenswrapper[4797]: I0216 11:16:42.965265 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn"] Feb 16 11:16:43 crc kubenswrapper[4797]: I0216 11:16:43.176344 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" event={"ID":"2375197b-bcee-4713-841d-26bf583e7502","Type":"ContainerStarted","Data":"7143d2a4b3ccb38c233c4cb1837b329d50b7b2d194215bb478d8be594031a620"} Feb 16 11:16:44 crc kubenswrapper[4797]: I0216 11:16:44.188413 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" event={"ID":"2375197b-bcee-4713-841d-26bf583e7502","Type":"ContainerStarted","Data":"c218460918ea4028a86e763f214583741e8b3952b08ea5410322af0787354fbd"} Feb 16 11:16:45 crc kubenswrapper[4797]: I0216 11:16:45.196677 4797 generic.go:334] "Generic (PLEG): container finished" podID="2375197b-bcee-4713-841d-26bf583e7502" containerID="c218460918ea4028a86e763f214583741e8b3952b08ea5410322af0787354fbd" exitCode=0 Feb 16 11:16:45 crc kubenswrapper[4797]: I0216 11:16:45.196775 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" event={"ID":"2375197b-bcee-4713-841d-26bf583e7502","Type":"ContainerDied","Data":"c218460918ea4028a86e763f214583741e8b3952b08ea5410322af0787354fbd"} Feb 16 11:16:45 crc kubenswrapper[4797]: I0216 11:16:45.199525 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:16:47 crc kubenswrapper[4797]: I0216 11:16:47.210564 4797 generic.go:334] "Generic (PLEG): container finished" podID="2375197b-bcee-4713-841d-26bf583e7502" containerID="5cd0fc6b472b2be545b3ddff65dfb01892e39445c77a853da6cc1af95116808f" exitCode=0 Feb 16 11:16:47 crc kubenswrapper[4797]: I0216 11:16:47.210797 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" event={"ID":"2375197b-bcee-4713-841d-26bf583e7502","Type":"ContainerDied","Data":"5cd0fc6b472b2be545b3ddff65dfb01892e39445c77a853da6cc1af95116808f"} Feb 16 11:16:48 crc kubenswrapper[4797]: I0216 11:16:48.218527 4797 generic.go:334] "Generic (PLEG): container finished" podID="2375197b-bcee-4713-841d-26bf583e7502" containerID="28632727227ea7bbedbb743c84e391b2b701da57206944f0d48385edf03e55f6" exitCode=0 Feb 16 11:16:48 crc kubenswrapper[4797]: I0216 11:16:48.218612 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" event={"ID":"2375197b-bcee-4713-841d-26bf583e7502","Type":"ContainerDied","Data":"28632727227ea7bbedbb743c84e391b2b701da57206944f0d48385edf03e55f6"} Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.424491 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.510615 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-util\") pod \"2375197b-bcee-4713-841d-26bf583e7502\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.510684 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-bundle\") pod \"2375197b-bcee-4713-841d-26bf583e7502\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.510727 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cglmd\" (UniqueName: \"kubernetes.io/projected/2375197b-bcee-4713-841d-26bf583e7502-kube-api-access-cglmd\") pod \"2375197b-bcee-4713-841d-26bf583e7502\" (UID: \"2375197b-bcee-4713-841d-26bf583e7502\") " Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.512927 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-bundle" (OuterVolumeSpecName: "bundle") pod "2375197b-bcee-4713-841d-26bf583e7502" (UID: "2375197b-bcee-4713-841d-26bf583e7502"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.518966 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2375197b-bcee-4713-841d-26bf583e7502-kube-api-access-cglmd" (OuterVolumeSpecName: "kube-api-access-cglmd") pod "2375197b-bcee-4713-841d-26bf583e7502" (UID: "2375197b-bcee-4713-841d-26bf583e7502"). InnerVolumeSpecName "kube-api-access-cglmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.522897 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-util" (OuterVolumeSpecName: "util") pod "2375197b-bcee-4713-841d-26bf583e7502" (UID: "2375197b-bcee-4713-841d-26bf583e7502"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.612293 4797 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.612336 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cglmd\" (UniqueName: \"kubernetes.io/projected/2375197b-bcee-4713-841d-26bf583e7502-kube-api-access-cglmd\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:49 crc kubenswrapper[4797]: I0216 11:16:49.612362 4797 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2375197b-bcee-4713-841d-26bf583e7502-util\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:50 crc kubenswrapper[4797]: I0216 11:16:50.233492 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" event={"ID":"2375197b-bcee-4713-841d-26bf583e7502","Type":"ContainerDied","Data":"7143d2a4b3ccb38c233c4cb1837b329d50b7b2d194215bb478d8be594031a620"} Feb 16 11:16:50 crc kubenswrapper[4797]: I0216 11:16:50.233534 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7143d2a4b3ccb38c233c4cb1837b329d50b7b2d194215bb478d8be594031a620" Feb 16 11:16:50 crc kubenswrapper[4797]: I0216 11:16:50.233626 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn" Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.832139 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h9hsp"] Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.833039 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-controller" containerID="cri-o://2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f" gracePeriod=30 Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.833557 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="sbdb" containerID="cri-o://8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869" gracePeriod=30 Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.833676 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="nbdb" containerID="cri-o://219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a" gracePeriod=30 Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.833743 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="northd" containerID="cri-o://02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7" gracePeriod=30 Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.833796 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-ovn-metrics" 
containerID="cri-o://cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e" gracePeriod=30 Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.833846 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-node" containerID="cri-o://d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217" gracePeriod=30 Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.833905 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-acl-logging" containerID="cri-o://3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be" gracePeriod=30 Feb 16 11:16:53 crc kubenswrapper[4797]: I0216 11:16:53.875536 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" containerID="cri-o://9b639213eee10103d5dd443502e1ef8136381ee923d36ae9608b41bc0a1b2954" gracePeriod=30 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.254945 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovnkube-controller/3.log" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.256569 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-acl-logging/0.log" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.256944 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-controller/0.log" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257222 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="9b639213eee10103d5dd443502e1ef8136381ee923d36ae9608b41bc0a1b2954" exitCode=0 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257245 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869" exitCode=0 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257253 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a" exitCode=0 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257259 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7" exitCode=0 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257265 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e" exitCode=0 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257271 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217" exitCode=0 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257278 4797 generic.go:334] "Generic (PLEG): 
container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be" exitCode=143 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257286 4797 generic.go:334] "Generic (PLEG): container finished" podID="812f1f08-469d-44f4-907e-60ad61837364" containerID="2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f" exitCode=143 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257332 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"9b639213eee10103d5dd443502e1ef8136381ee923d36ae9608b41bc0a1b2954"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257367 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257381 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257392 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257404 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257414 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257425 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257437 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.257455 4797 scope.go:117] "RemoveContainer" containerID="092dbcf0e49fbf3cc900cdcc2c16987f5c84253f01fd9fd773929bd9376bcb9b" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.260253 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/2.log" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.260627 4797 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/1.log" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.260653 4797 generic.go:334] "Generic (PLEG): container finished" podID="9532a098-7e41-454c-af48-44f9a9478d12" containerID="75bb8a48b7bbd354f63efb913901a9ba447a87a652655d54697b2c03365b4699" exitCode=2 Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.260671 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerDied","Data":"75bb8a48b7bbd354f63efb913901a9ba447a87a652655d54697b2c03365b4699"} Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.261105 4797 scope.go:117] "RemoveContainer" containerID="75bb8a48b7bbd354f63efb913901a9ba447a87a652655d54697b2c03365b4699" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.261395 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5qvbt_openshift-multus(9532a098-7e41-454c-af48-44f9a9478d12)\"" pod="openshift-multus/multus-5qvbt" podUID="9532a098-7e41-454c-af48-44f9a9478d12" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.305513 4797 scope.go:117] "RemoveContainer" containerID="add78f37ddde7d8aaedb5783128c8f7f19f74ffe6ab10f54c85be98d5ec3bcbc" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.517177 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-acl-logging/0.log" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.517661 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-controller/0.log" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.518106 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.579371 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-etc-openvswitch\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.579678 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/812f1f08-469d-44f4-907e-60ad61837364-ovn-node-metrics-cert\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.579770 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-config\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.579877 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-script-lib\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.579538 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.580261 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.580493 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.580664 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-ovn-kubernetes\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.580839 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-openvswitch\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581003 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-systemd\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581408 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-systemd-units\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581594 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-netns\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581690 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv4sj\" (UniqueName: \"kubernetes.io/projected/812f1f08-469d-44f4-907e-60ad61837364-kube-api-access-mv4sj\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581779 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-var-lib-cni-networks-ovn-kubernetes\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581868 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-log-socket\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581952 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-var-lib-openvswitch\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582041 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-bin\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582122 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-slash\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582193 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-ovn\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.580782 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.580958 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581518 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.581729 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582264 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-log-socket" (OuterVolumeSpecName: "log-socket") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582362 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582282 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-netd\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582427 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-node-log\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582458 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-env-overrides\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582483 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-kubelet\") pod \"812f1f08-469d-44f4-907e-60ad61837364\" (UID: \"812f1f08-469d-44f4-907e-60ad61837364\") " Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582855 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582891 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582904 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-node-log" (OuterVolumeSpecName: "node-log") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583040 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583116 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). 
InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583154 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583178 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583210 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-slash" (OuterVolumeSpecName: "host-slash") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.582871 4797 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583428 4797 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583499 4797 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583595 4797 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583677 4797 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583741 4797 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583798 4797 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583849 4797 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.583899 4797 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.587245 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812f1f08-469d-44f4-907e-60ad61837364-kube-api-access-mv4sj" (OuterVolumeSpecName: "kube-api-access-mv4sj") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "kube-api-access-mv4sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.598811 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/812f1f08-469d-44f4-907e-60ad61837364-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.603142 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "812f1f08-469d-44f4-907e-60ad61837364" (UID: "812f1f08-469d-44f4-907e-60ad61837364"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628397 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hmzbb"] Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628606 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2375197b-bcee-4713-841d-26bf583e7502" containerName="pull" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628619 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="2375197b-bcee-4713-841d-26bf583e7502" containerName="pull" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628626 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-node" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628633 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-node" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628643 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2375197b-bcee-4713-841d-26bf583e7502" containerName="util" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628649 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="2375197b-bcee-4713-841d-26bf583e7502" containerName="util" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628655 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628661 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628670 4797 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-acl-logging" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628675 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-acl-logging" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628684 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628689 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628696 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kubecfg-setup" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628701 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kubecfg-setup" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628710 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628716 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628724 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="northd" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628730 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="northd" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628737 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628742 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628750 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628756 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628762 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2375197b-bcee-4713-841d-26bf583e7502" containerName="extract" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628767 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="2375197b-bcee-4713-841d-26bf583e7502" containerName="extract" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628774 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="nbdb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628781 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="nbdb" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628789 4797 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628797 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.628805 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="sbdb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628811 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="sbdb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628891 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628901 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628909 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="nbdb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628918 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="2375197b-bcee-4713-841d-26bf583e7502" containerName="extract" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628924 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="sbdb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628933 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628939 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-node" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628948 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628955 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="northd" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628964 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.628972 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovn-acl-logging" Feb 16 11:16:54 crc kubenswrapper[4797]: E0216 11:16:54.629079 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.629087 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.629177 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="812f1f08-469d-44f4-907e-60ad61837364" containerName="ovnkube-controller" Feb 16 11:16:54 
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.630594 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685156 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685215 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-ovn\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685239 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-env-overrides\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685357 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-cni-bin\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685419 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-etc-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685494 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-systemd\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685517 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-node-log\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685555 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") "
pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685594 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovnkube-script-lib\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685626 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccgc\" (UniqueName: \"kubernetes.io/projected/d127ddb0-f09b-47c2-b281-b465a0e78cf4-kube-api-access-sccgc\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685663 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-log-socket\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685709 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-run-netns\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685735 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-kubelet\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685755 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-systemd-units\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685778 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685823 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-slash\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685856 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-cni-netd\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685879 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovn-node-metrics-cert\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685938 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovnkube-config\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.685968 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-var-lib-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686040 4797 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686057 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv4sj\" (UniqueName: \"kubernetes.io/projected/812f1f08-469d-44f4-907e-60ad61837364-kube-api-access-mv4sj\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686073 4797 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686084 4797 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686097 4797 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686110 4797 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686121 4797 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686132 4797 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-node-log\") on node \"crc\" DevicePath 
\"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686142 4797 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/812f1f08-469d-44f4-907e-60ad61837364-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686153 4797 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/812f1f08-469d-44f4-907e-60ad61837364-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.686168 4797 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/812f1f08-469d-44f4-907e-60ad61837364-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787332 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovnkube-config\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787614 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-var-lib-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787700 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787785 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-ovn\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787855 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-env-overrides\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787800 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-run-ovn-kubernetes\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787831 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-ovn\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 
11:16:54.787926 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-cni-bin\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787982 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovnkube-config\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788000 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-etc-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788021 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-etc-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788077 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-systemd\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788095 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-node-log\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788128 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788147 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovnkube-script-lib\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788180 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccgc\" (UniqueName: \"kubernetes.io/projected/d127ddb0-f09b-47c2-b281-b465a0e78cf4-kube-api-access-sccgc\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788209 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-socket\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-log-socket\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788242 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-run-netns\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788259 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788276 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-kubelet\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788289 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-systemd-units\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788322 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-slash\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788340 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-cni-netd\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788355 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovn-node-metrics-cert\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788445 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-env-overrides\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788479 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-log-socket\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788500 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-systemd\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788522 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-node-log\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788552 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-run-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788699 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-kubelet\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788755 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-run-netns\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788790 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788800 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-slash\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788841 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-systemd-units\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.787745 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-var-lib-openvswitch\") pod \"ovnkube-node-hmzbb\" (UID: 
\"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788823 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-cni-netd\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.788986 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovnkube-script-lib\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.789280 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d127ddb0-f09b-47c2-b281-b465a0e78cf4-host-cni-bin\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.791646 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d127ddb0-f09b-47c2-b281-b465a0e78cf4-ovn-node-metrics-cert\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.814244 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccgc\" (UniqueName: \"kubernetes.io/projected/d127ddb0-f09b-47c2-b281-b465a0e78cf4-kube-api-access-sccgc\") pod \"ovnkube-node-hmzbb\" (UID: \"d127ddb0-f09b-47c2-b281-b465a0e78cf4\") " pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.942694 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.267763 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-acl-logging/0.log" Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.268332 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-controller/0.log" Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.268766 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"5b4e9a95230f56894a97e64e3f2a1083fbf3fc3c7debb2d6da8f8150dcc08672"} Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.268862 4797 scope.go:117] "RemoveContainer" containerID="9b639213eee10103d5dd443502e1ef8136381ee923d36ae9608b41bc0a1b2954" Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.269016 4797 util.go:48] "No ready sandbox for pod can be found. 
Feb 16 11:16:54 crc kubenswrapper[4797]: I0216 11:16:54.942694 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.267763 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-acl-logging/0.log"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.268332 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h9hsp_812f1f08-469d-44f4-907e-60ad61837364/ovn-controller/0.log"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.268766 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp" event={"ID":"812f1f08-469d-44f4-907e-60ad61837364","Type":"ContainerDied","Data":"5b4e9a95230f56894a97e64e3f2a1083fbf3fc3c7debb2d6da8f8150dcc08672"}
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.268862 4797 scope.go:117] "RemoveContainer" containerID="9b639213eee10103d5dd443502e1ef8136381ee923d36ae9608b41bc0a1b2954"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.269016 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h9hsp"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.276703 4797 generic.go:334] "Generic (PLEG): container finished" podID="d127ddb0-f09b-47c2-b281-b465a0e78cf4" containerID="ca817d06a05de975b448f1c96bee66c43a29126f9d032632c498098e00545fc3" exitCode=0
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.276779 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerDied","Data":"ca817d06a05de975b448f1c96bee66c43a29126f9d032632c498098e00545fc3"}
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.276828 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"e6e67bf12640595593fd00ac6c1dd7a9765befdb1fae355dc7bed479d0bd9d88"}
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.278743 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/2.log"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.291084 4797 scope.go:117] "RemoveContainer" containerID="8596f8ce3b0db54be65bfde61f8808e8d0ed424672c54855d434042d473b4869"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.314192 4797 scope.go:117] "RemoveContainer" containerID="219fb35d2646068db4e483a14b90d9fdfd5483c0e11944e57a43bf14044b450a"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.351938 4797 scope.go:117] "RemoveContainer" containerID="02f857cf52a9244b7d109ca2d3490e3d5458317f4ccd47fb1d736c885d7723a7"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.393864 4797 scope.go:117] "RemoveContainer" containerID="cff3da2e5ae4cbda05af1a93da7e89528ee1806e8c3210f5b6404ba805e23d0e"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.417629 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h9hsp"]
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.422726 4797 scope.go:117] "RemoveContainer" containerID="d57df92ba2480e98db8c1b0a8947be31b71bb7bac7585269aaa32b898bc2a217"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.424211 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h9hsp"]
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.453544 4797 scope.go:117] "RemoveContainer" containerID="3db8e6c059354a0bc21f9bd3213bd07c8e12f201b2e45343f72532aac10c14be"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.485620 4797 scope.go:117] "RemoveContainer" containerID="2ea5745ce932db1630efe5da00d5868a2073e7cbcbf17701381dc508a109ce1f"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.502418 4797 scope.go:117] "RemoveContainer" containerID="8769b33d70973f667a88fd2ca98e20553e85de672eb28d88ed3448aa22dd2438"
Feb 16 11:16:55 crc kubenswrapper[4797]: I0216 11:16:55.995175 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812f1f08-469d-44f4-907e-60ad61837364" path="/var/lib/kubelet/pods/812f1f08-469d-44f4-907e-60ad61837364/volumes"
Feb 16 11:16:56 crc kubenswrapper[4797]: I0216 11:16:56.289925 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"b407837ae51f18b37f901eb0be2b37771b25df64368b0bb2ac697a685e862950"}
Feb 16 11:16:56 crc
kubenswrapper[4797]: I0216 11:16:56.290216 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"7fccc61bd63e54175480d628968597a7661fcf0b38f6e43c3bb442a8489bce1e"} Feb 16 11:16:56 crc kubenswrapper[4797]: I0216 11:16:56.290227 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"3fa1bdbff2190703d3a2a8eada57e894323a3c718bffbd3a5084a3b1379f0249"} Feb 16 11:16:56 crc kubenswrapper[4797]: I0216 11:16:56.290236 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"0eb7bbba91365aa722c5eefcb0d97975380f6042c5855e701cf46b11360d4f6f"} Feb 16 11:16:56 crc kubenswrapper[4797]: I0216 11:16:56.290245 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"2eddc6b15774c18ab5a73b6fb64d5f9fd32473644616ad01a38dd4d011c9f244"} Feb 16 11:16:57 crc kubenswrapper[4797]: I0216 11:16:57.298051 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"105128934c08aa4946412bd1d1a738dbc5c0bb568e64a5e05bdc329c213f793f"} Feb 16 11:16:59 crc kubenswrapper[4797]: I0216 11:16:59.312747 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"d6a2cf896849354c80aa24aad331b19b879d010605177619ee8544b8743484b2"} Feb 16 11:16:59 crc kubenswrapper[4797]: I0216 11:16:59.943462 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc"] Feb 16 11:16:59 crc kubenswrapper[4797]: I0216 11:16:59.944478 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:16:59 crc kubenswrapper[4797]: I0216 11:16:59.947241 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-nmxjw" Feb 16 11:16:59 crc kubenswrapper[4797]: I0216 11:16:59.947798 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 11:16:59 crc kubenswrapper[4797]: I0216 11:16:59.947934 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.055244 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqc5h\" (UniqueName: \"kubernetes.io/projected/6635912a-ab64-4ee0-8b76-17ed2b17a7cd-kube-api-access-pqc5h\") pod \"obo-prometheus-operator-68bc856cb9-m4hxc\" (UID: \"6635912a-ab64-4ee0-8b76-17ed2b17a7cd\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.060101 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb"] Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.060751 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.062745 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-zfzvd" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.062745 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.071300 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6"] Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.072156 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.156274 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqc5h\" (UniqueName: \"kubernetes.io/projected/6635912a-ab64-4ee0-8b76-17ed2b17a7cd-kube-api-access-pqc5h\") pod \"obo-prometheus-operator-68bc856cb9-m4hxc\" (UID: \"6635912a-ab64-4ee0-8b76-17ed2b17a7cd\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.156356 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d0ca59-5007-44d2-86e6-342c60bddb88-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb\" (UID: \"a5d0ca59-5007-44d2-86e6-342c60bddb88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.156391 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d0ca59-5007-44d2-86e6-342c60bddb88-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb\" (UID: \"a5d0ca59-5007-44d2-86e6-342c60bddb88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.156437 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f30f63d1-0224-458f-9dcb-c5ca305c5a10-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6\" (UID: \"f30f63d1-0224-458f-9dcb-c5ca305c5a10\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.156510 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f30f63d1-0224-458f-9dcb-c5ca305c5a10-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6\" (UID: \"f30f63d1-0224-458f-9dcb-c5ca305c5a10\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.166950 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7vsxb"] Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.167817 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.170609 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.170780 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-ct4xx" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.184733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqc5h\" (UniqueName: \"kubernetes.io/projected/6635912a-ab64-4ee0-8b76-17ed2b17a7cd-kube-api-access-pqc5h\") pod \"obo-prometheus-operator-68bc856cb9-m4hxc\" (UID: \"6635912a-ab64-4ee0-8b76-17ed2b17a7cd\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.257333 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxcx\" (UniqueName: \"kubernetes.io/projected/c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8-kube-api-access-7bxcx\") pod \"observability-operator-59bdc8b94-7vsxb\" (UID: \"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8\") " pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.257389 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d0ca59-5007-44d2-86e6-342c60bddb88-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb\" (UID: \"a5d0ca59-5007-44d2-86e6-342c60bddb88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.257411 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d0ca59-5007-44d2-86e6-342c60bddb88-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb\" (UID: \"a5d0ca59-5007-44d2-86e6-342c60bddb88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.257434 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f30f63d1-0224-458f-9dcb-c5ca305c5a10-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6\" (UID: \"f30f63d1-0224-458f-9dcb-c5ca305c5a10\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.257462 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7vsxb\" (UID: \"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8\") " pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.257486 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f30f63d1-0224-458f-9dcb-c5ca305c5a10-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6\" (UID: \"f30f63d1-0224-458f-9dcb-c5ca305c5a10\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.259882 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.262877 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d0ca59-5007-44d2-86e6-342c60bddb88-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb\" (UID: \"a5d0ca59-5007-44d2-86e6-342c60bddb88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.264889 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f30f63d1-0224-458f-9dcb-c5ca305c5a10-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6\" (UID: \"f30f63d1-0224-458f-9dcb-c5ca305c5a10\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.270404 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f30f63d1-0224-458f-9dcb-c5ca305c5a10-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6\" (UID: \"f30f63d1-0224-458f-9dcb-c5ca305c5a10\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.273321 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m98kn"] Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.274282 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d0ca59-5007-44d2-86e6-342c60bddb88-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb\" (UID: \"a5d0ca59-5007-44d2-86e6-342c60bddb88\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.274447 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.280400 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-nlvpj" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.300970 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(9edbeea6e95da9df958e214a4b0aa2b1df712ccde52d3f1085d79fa005d30c41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.301091 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(9edbeea6e95da9df958e214a4b0aa2b1df712ccde52d3f1085d79fa005d30c41): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.301119 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(9edbeea6e95da9df958e214a4b0aa2b1df712ccde52d3f1085d79fa005d30c41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.301190 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators(6635912a-ab64-4ee0-8b76-17ed2b17a7cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators(6635912a-ab64-4ee0-8b76-17ed2b17a7cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(9edbeea6e95da9df958e214a4b0aa2b1df712ccde52d3f1085d79fa005d30c41): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" podUID="6635912a-ab64-4ee0-8b76-17ed2b17a7cd" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.358295 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8-observability-operator-tls\") pod \"observability-operator-59bdc8b94-7vsxb\" (UID: \"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8\") " pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.358380 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxcx\" (UniqueName: \"kubernetes.io/projected/c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8-kube-api-access-7bxcx\") pod \"observability-operator-59bdc8b94-7vsxb\" (UID: \"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8\") " pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.358407 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6001f2a-b067-4fe6-b250-bce9a306e7e6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-m98kn\" (UID: \"f6001f2a-b067-4fe6-b250-bce9a306e7e6\") " pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.358436 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmvvx\" (UniqueName: \"kubernetes.io/projected/f6001f2a-b067-4fe6-b250-bce9a306e7e6-kube-api-access-lmvvx\") pod \"perses-operator-5bf474d74f-m98kn\" (UID: \"f6001f2a-b067-4fe6-b250-bce9a306e7e6\") " pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.364091 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8-observability-operator-tls\") pod 
\"observability-operator-59bdc8b94-7vsxb\" (UID: \"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8\") " pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.374373 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.376195 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bxcx\" (UniqueName: \"kubernetes.io/projected/c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8-kube-api-access-7bxcx\") pod \"observability-operator-59bdc8b94-7vsxb\" (UID: \"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8\") " pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.385487 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.401620 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(4128a9593d088169f520ffc4aeacde2655712112ab2facd5946fbb3413dd1649): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.401690 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(4128a9593d088169f520ffc4aeacde2655712112ab2facd5946fbb3413dd1649): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.401723 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(4128a9593d088169f520ffc4aeacde2655712112ab2facd5946fbb3413dd1649): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.401775 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators(a5d0ca59-5007-44d2-86e6-342c60bddb88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators(a5d0ca59-5007-44d2-86e6-342c60bddb88)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(4128a9593d088169f520ffc4aeacde2655712112ab2facd5946fbb3413dd1649): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" podUID="a5d0ca59-5007-44d2-86e6-342c60bddb88" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.411787 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(df57c28851af86f6ee13c9590c23c6fe0923b3ccdc0537a4b6ce5e2f6ce5455e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.411841 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(df57c28851af86f6ee13c9590c23c6fe0923b3ccdc0537a4b6ce5e2f6ce5455e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.411863 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(df57c28851af86f6ee13c9590c23c6fe0923b3ccdc0537a4b6ce5e2f6ce5455e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.411898 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators(f30f63d1-0224-458f-9dcb-c5ca305c5a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators(f30f63d1-0224-458f-9dcb-c5ca305c5a10)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(df57c28851af86f6ee13c9590c23c6fe0923b3ccdc0537a4b6ce5e2f6ce5455e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" podUID="f30f63d1-0224-458f-9dcb-c5ca305c5a10" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.459723 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6001f2a-b067-4fe6-b250-bce9a306e7e6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-m98kn\" (UID: \"f6001f2a-b067-4fe6-b250-bce9a306e7e6\") " pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.459778 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmvvx\" (UniqueName: \"kubernetes.io/projected/f6001f2a-b067-4fe6-b250-bce9a306e7e6-kube-api-access-lmvvx\") pod \"perses-operator-5bf474d74f-m98kn\" (UID: \"f6001f2a-b067-4fe6-b250-bce9a306e7e6\") " pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.460888 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/f6001f2a-b067-4fe6-b250-bce9a306e7e6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-m98kn\" (UID: \"f6001f2a-b067-4fe6-b250-bce9a306e7e6\") " pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.480344 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.484242 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmvvx\" (UniqueName: \"kubernetes.io/projected/f6001f2a-b067-4fe6-b250-bce9a306e7e6-kube-api-access-lmvvx\") pod \"perses-operator-5bf474d74f-m98kn\" (UID: \"f6001f2a-b067-4fe6-b250-bce9a306e7e6\") " pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.499918 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(58baf675e378b56b65d96e5004c199b043b718eb4cc2a4e914447a696960c2b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.499997 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(58baf675e378b56b65d96e5004c199b043b718eb4cc2a4e914447a696960c2b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.500024 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(58baf675e378b56b65d96e5004c199b043b718eb4cc2a4e914447a696960c2b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.500104 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7vsxb_openshift-operators(c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7vsxb_openshift-operators(c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(58baf675e378b56b65d96e5004c199b043b718eb4cc2a4e914447a696960c2b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" podUID="c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8" Feb 16 11:17:00 crc kubenswrapper[4797]: I0216 11:17:00.630767 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.646804 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(9939da4e1a89ffd0d35f6e75c950181bf06f8860377e224c09a7d59b120881f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.646884 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(9939da4e1a89ffd0d35f6e75c950181bf06f8860377e224c09a7d59b120881f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.646911 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(9939da4e1a89ffd0d35f6e75c950181bf06f8860377e224c09a7d59b120881f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:00 crc kubenswrapper[4797]: E0216 11:17:00.646981 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-m98kn_openshift-operators(f6001f2a-b067-4fe6-b250-bce9a306e7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-m98kn_openshift-operators(f6001f2a-b067-4fe6-b250-bce9a306e7e6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(9939da4e1a89ffd0d35f6e75c950181bf06f8860377e224c09a7d59b120881f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" podUID="f6001f2a-b067-4fe6-b250-bce9a306e7e6" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.332016 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" event={"ID":"d127ddb0-f09b-47c2-b281-b465a0e78cf4","Type":"ContainerStarted","Data":"c6ab133038400a1d6c246eb6c4b5001d3f8ee9062fff53990be24b937c8e032d"} Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.332671 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.332698 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.367788 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.371664 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" podStartSLOduration=7.371646357 podStartE2EDuration="7.371646357s" podCreationTimestamp="2026-02-16 11:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:17:01.368549925 +0000 UTC m=+616.088734915" watchObservedRunningTime="2026-02-16 11:17:01.371646357 +0000 UTC m=+616.091831347" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.773262 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m98kn"] Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.773390 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.773934 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.778646 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6"] Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.778804 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.779207 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.798538 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc"] Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.798638 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.799010 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.802914 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb"] Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.802990 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.803240 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.804264 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(4708697d64347243f839cbaef27c7109d65f500b5ca2f3298ef5cecc9622bacc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.804306 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(4708697d64347243f839cbaef27c7109d65f500b5ca2f3298ef5cecc9622bacc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.804326 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(4708697d64347243f839cbaef27c7109d65f500b5ca2f3298ef5cecc9622bacc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.804544 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-m98kn_openshift-operators(f6001f2a-b067-4fe6-b250-bce9a306e7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-m98kn_openshift-operators(f6001f2a-b067-4fe6-b250-bce9a306e7e6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(4708697d64347243f839cbaef27c7109d65f500b5ca2f3298ef5cecc9622bacc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" podUID="f6001f2a-b067-4fe6-b250-bce9a306e7e6" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.809490 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(7986c3716951e625d146e0c8a6559a280fec09fdbc9f9a0aa6d48a1a646d474a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.809534 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(7986c3716951e625d146e0c8a6559a280fec09fdbc9f9a0aa6d48a1a646d474a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.809553 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(7986c3716951e625d146e0c8a6559a280fec09fdbc9f9a0aa6d48a1a646d474a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.809606 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators(f30f63d1-0224-458f-9dcb-c5ca305c5a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators(f30f63d1-0224-458f-9dcb-c5ca305c5a10)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(7986c3716951e625d146e0c8a6559a280fec09fdbc9f9a0aa6d48a1a646d474a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" podUID="f30f63d1-0224-458f-9dcb-c5ca305c5a10" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.820538 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7vsxb"] Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.820671 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:01 crc kubenswrapper[4797]: I0216 11:17:01.821134 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.857112 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(6fa0e281f23d6775b06fa5bd8ad077dddacc7cb87bee92a2cf04ea95fc452227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
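Every CreatePodSandbox failure in this stretch has the same root cause: kubelet asks CRI-O for a pod sandbox, CRI-O finds no CNI network configuration in /etc/kubernetes/cni/net.d/, the RunPodSandbox RPC fails, and pod_workers skips the sync and retries, so the sequence repeats for each affected pod (obo-prometheus-operator-68bc856cb9-m4hxc, the two admission-webhook replicas, observability-operator-59bdc8b94-7vsxb, perses-operator-5bf474d74f-m98kn). The loop ends further down in this log once the network provider recovers: kube-multus restarts at 11:17:23, and from 11:17:26 onward the same pods get ContainerStarted events instead of sandbox errors. Below is a minimal sketch of the "is there a CNI config yet?" check, assuming only that the runtime looks for *.conf/*.conflist/*.json files in the directory named by the error; it is a diagnostic stand-in, not CRI-O's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// The directory named in the repeated error message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	for _, e := range entries {
		// Assumed extension set: libcni-style loaders accept .conf,
		// .conflist and .json network definitions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(dir, e.Name()))
			return
		}
	}
	fmt.Println("no CNI configuration file in", dir, "- has your network provider started?")
}
```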
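The network provider's recovery is visible below: at 11:17:08 kube-multus is still in CrashLoopBackOff ("back-off 20s restarting failed container"), at 11:17:22 kubelet removes the dead container and retries, and at 11:17:23 a new kube-multus container starts (its log path ending in kube-multus/2.log suggests this is the third instance). The 20s figure is consistent with kubelet's restart back-off, assumed here to start at 10s and double per crash up to a 5m cap; a short sketch of that schedule under those assumed defaults:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults: 10s initial back-off, doubled after
	// each crash, capped at 5 minutes. Restart #2 then waits 20s,
	// matching the "back-off 20s ... kube-multus" message in this log.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 6; restart++ {
		fmt.Printf("restart %d: back-off %v\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```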
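The pod_startup_latency_tracker entries (ovnkube-node-hmzbb just above, and the operator pods once their sandboxes finally start) encode a relationship that can be checked from the logged fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be the same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For ovnkube-node the pull timestamps are zero values ("0001-01-01"), so the two durations coincide at 7.371646357s. Recomputing the later obo-prometheus-operator-admission-webhook-65f76847bb-259w6 entry under that assumed formula reproduces the logged numbers exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the tracker entry for
	// obo-prometheus-operator-admission-webhook-65f76847bb-259w6.
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-16 11:17:00 +0000 UTC")
	firstPull := parse("2026-02-16 11:17:26.437853578 +0000 UTC")
	lastPull := parse("2026-02-16 11:17:30.343983549 +0000 UTC")
	observed := parse("2026-02-16 11:17:30.750763318 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // assumption: pull time is excluded

	fmt.Println("podStartE2EDuration:", e2e) // 30.750763318s, as logged
	fmt.Println("podStartSLOduration:", slo) // 26.844633347s, as logged
}
```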
Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.857292 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(6fa0e281f23d6775b06fa5bd8ad077dddacc7cb87bee92a2cf04ea95fc452227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.857317 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(6fa0e281f23d6775b06fa5bd8ad077dddacc7cb87bee92a2cf04ea95fc452227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.857368 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators(6635912a-ab64-4ee0-8b76-17ed2b17a7cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators(6635912a-ab64-4ee0-8b76-17ed2b17a7cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(6fa0e281f23d6775b06fa5bd8ad077dddacc7cb87bee92a2cf04ea95fc452227): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" podUID="6635912a-ab64-4ee0-8b76-17ed2b17a7cd" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.865045 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(f46929e4eee3f015ea815d0ae6aad5fcb049dadc082abb54ff0655658c9e5883): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.865111 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(f46929e4eee3f015ea815d0ae6aad5fcb049dadc082abb54ff0655658c9e5883): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.865130 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(f46929e4eee3f015ea815d0ae6aad5fcb049dadc082abb54ff0655658c9e5883): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.865168 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7vsxb_openshift-operators(c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7vsxb_openshift-operators(c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(f46929e4eee3f015ea815d0ae6aad5fcb049dadc082abb54ff0655658c9e5883): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" podUID="c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.867241 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(48481513ad5b1829779ba005a078785b58019a6b6f66eb5ac2b19763c2c8e80e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.867284 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(48481513ad5b1829779ba005a078785b58019a6b6f66eb5ac2b19763c2c8e80e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.867304 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(48481513ad5b1829779ba005a078785b58019a6b6f66eb5ac2b19763c2c8e80e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:01 crc kubenswrapper[4797]: E0216 11:17:01.867336 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators(a5d0ca59-5007-44d2-86e6-342c60bddb88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators(a5d0ca59-5007-44d2-86e6-342c60bddb88)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(48481513ad5b1829779ba005a078785b58019a6b6f66eb5ac2b19763c2c8e80e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" podUID="a5d0ca59-5007-44d2-86e6-342c60bddb88" Feb 16 11:17:02 crc kubenswrapper[4797]: I0216 11:17:02.338503 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:17:02 crc kubenswrapper[4797]: I0216 11:17:02.383161 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:17:08 crc kubenswrapper[4797]: I0216 11:17:08.982548 4797 scope.go:117] "RemoveContainer" containerID="75bb8a48b7bbd354f63efb913901a9ba447a87a652655d54697b2c03365b4699" Feb 16 11:17:08 crc kubenswrapper[4797]: E0216 11:17:08.983863 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-5qvbt_openshift-multus(9532a098-7e41-454c-af48-44f9a9478d12)\"" pod="openshift-multus/multus-5qvbt" podUID="9532a098-7e41-454c-af48-44f9a9478d12" Feb 16 11:17:12 crc kubenswrapper[4797]: I0216 11:17:12.982098 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:12 crc kubenswrapper[4797]: I0216 11:17:12.982943 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:13 crc kubenswrapper[4797]: E0216 11:17:13.022354 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(46e046814d1b254bb3aede6fdbe52eef7bea25880081ba2706ca0db1ab567982): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:13 crc kubenswrapper[4797]: E0216 11:17:13.022495 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(46e046814d1b254bb3aede6fdbe52eef7bea25880081ba2706ca0db1ab567982): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:13 crc kubenswrapper[4797]: E0216 11:17:13.022629 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(46e046814d1b254bb3aede6fdbe52eef7bea25880081ba2706ca0db1ab567982): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:13 crc kubenswrapper[4797]: E0216 11:17:13.022764 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators(f30f63d1-0224-458f-9dcb-c5ca305c5a10)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators(f30f63d1-0224-458f-9dcb-c5ca305c5a10)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_openshift-operators_f30f63d1-0224-458f-9dcb-c5ca305c5a10_0(46e046814d1b254bb3aede6fdbe52eef7bea25880081ba2706ca0db1ab567982): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" podUID="f30f63d1-0224-458f-9dcb-c5ca305c5a10" Feb 16 11:17:14 crc kubenswrapper[4797]: I0216 11:17:14.982120 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:14 crc kubenswrapper[4797]: I0216 11:17:14.982161 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:14 crc kubenswrapper[4797]: I0216 11:17:14.984014 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:14 crc kubenswrapper[4797]: I0216 11:17:14.984202 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.022486 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(208380af07e6196f1a7bae217325c6bee90e4a8daf44133bc0760e4fe5f4bdec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.022555 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(208380af07e6196f1a7bae217325c6bee90e4a8daf44133bc0760e4fe5f4bdec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.022601 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(208380af07e6196f1a7bae217325c6bee90e4a8daf44133bc0760e4fe5f4bdec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.022661 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators(a5d0ca59-5007-44d2-86e6-342c60bddb88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators(a5d0ca59-5007-44d2-86e6-342c60bddb88)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_openshift-operators_a5d0ca59-5007-44d2-86e6-342c60bddb88_0(208380af07e6196f1a7bae217325c6bee90e4a8daf44133bc0760e4fe5f4bdec): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" podUID="a5d0ca59-5007-44d2-86e6-342c60bddb88" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.028665 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(50f3575a84a4e6a9f2a4a730e2627dc288df3d0ed50d2e979d8f367e93428a21): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.028765 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(50f3575a84a4e6a9f2a4a730e2627dc288df3d0ed50d2e979d8f367e93428a21): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.028801 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(50f3575a84a4e6a9f2a4a730e2627dc288df3d0ed50d2e979d8f367e93428a21): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:15 crc kubenswrapper[4797]: E0216 11:17:15.028876 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-m98kn_openshift-operators(f6001f2a-b067-4fe6-b250-bce9a306e7e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-m98kn_openshift-operators(f6001f2a-b067-4fe6-b250-bce9a306e7e6)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-m98kn_openshift-operators_f6001f2a-b067-4fe6-b250-bce9a306e7e6_0(50f3575a84a4e6a9f2a4a730e2627dc288df3d0ed50d2e979d8f367e93428a21): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" podUID="f6001f2a-b067-4fe6-b250-bce9a306e7e6" Feb 16 11:17:16 crc kubenswrapper[4797]: I0216 11:17:16.982149 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:16 crc kubenswrapper[4797]: I0216 11:17:16.982178 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:16 crc kubenswrapper[4797]: I0216 11:17:16.983019 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:16 crc kubenswrapper[4797]: I0216 11:17:16.983027 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.015756 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(a65d20d25f229072b9e1b6c978681e65108b89e72ea3d8c153aca24c3329d558): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.015827 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(a65d20d25f229072b9e1b6c978681e65108b89e72ea3d8c153aca24c3329d558): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.015850 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(a65d20d25f229072b9e1b6c978681e65108b89e72ea3d8c153aca24c3329d558): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.015903 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators(6635912a-ab64-4ee0-8b76-17ed2b17a7cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators(6635912a-ab64-4ee0-8b76-17ed2b17a7cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-m4hxc_openshift-operators_6635912a-ab64-4ee0-8b76-17ed2b17a7cd_0(a65d20d25f229072b9e1b6c978681e65108b89e72ea3d8c153aca24c3329d558): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" podUID="6635912a-ab64-4ee0-8b76-17ed2b17a7cd" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.024766 4797 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(03ae1fcb940c0dbe103de0168366ff302f3c750f261c0c9698b639e6c090fb3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.024835 4797 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(03ae1fcb940c0dbe103de0168366ff302f3c750f261c0c9698b639e6c090fb3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.024861 4797 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(03ae1fcb940c0dbe103de0168366ff302f3c750f261c0c9698b639e6c090fb3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:17 crc kubenswrapper[4797]: E0216 11:17:17.024914 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-7vsxb_openshift-operators(c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-7vsxb_openshift-operators(c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-7vsxb_openshift-operators_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8_0(03ae1fcb940c0dbe103de0168366ff302f3c750f261c0c9698b639e6c090fb3d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" podUID="c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8" Feb 16 11:17:22 crc kubenswrapper[4797]: I0216 11:17:22.983422 4797 scope.go:117] "RemoveContainer" containerID="75bb8a48b7bbd354f63efb913901a9ba447a87a652655d54697b2c03365b4699" Feb 16 11:17:23 crc kubenswrapper[4797]: I0216 11:17:23.683619 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5qvbt_9532a098-7e41-454c-af48-44f9a9478d12/kube-multus/2.log" Feb 16 11:17:23 crc kubenswrapper[4797]: I0216 11:17:23.684088 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5qvbt" event={"ID":"9532a098-7e41-454c-af48-44f9a9478d12","Type":"ContainerStarted","Data":"d5671801d545fb3197f4749e5d8982b8522d6265e5e00a3cfc076c14ff814873"} Feb 16 11:17:24 crc kubenswrapper[4797]: I0216 11:17:24.964801 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hmzbb" Feb 16 11:17:25 crc kubenswrapper[4797]: I0216 11:17:25.985356 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:25 crc kubenswrapper[4797]: I0216 11:17:25.985999 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" Feb 16 11:17:26 crc kubenswrapper[4797]: I0216 11:17:26.426614 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6"] Feb 16 11:17:26 crc kubenswrapper[4797]: I0216 11:17:26.701538 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" event={"ID":"f30f63d1-0224-458f-9dcb-c5ca305c5a10","Type":"ContainerStarted","Data":"137cb5d9c9e10515f55e80d696ca24fdfd6ea6792fa6438eeca3a6e8e6cb702d"} Feb 16 11:17:28 crc kubenswrapper[4797]: I0216 11:17:28.982307 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:28 crc kubenswrapper[4797]: I0216 11:17:28.982871 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:28 crc kubenswrapper[4797]: I0216 11:17:28.982980 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:28 crc kubenswrapper[4797]: I0216 11:17:28.983451 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:29 crc kubenswrapper[4797]: I0216 11:17:29.981820 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:29 crc kubenswrapper[4797]: I0216 11:17:29.982247 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.451545 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-m98kn"] Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.515940 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb"] Feb 16 11:17:30 crc kubenswrapper[4797]: W0216 11:17:30.518284 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5d0ca59_5007_44d2_86e6_342c60bddb88.slice/crio-268a559cb08d765a5842d8fe5be272cfcb407282c850f4b2f25e82299f997f70 WatchSource:0}: Error finding container 268a559cb08d765a5842d8fe5be272cfcb407282c850f4b2f25e82299f997f70: Status 404 returned error can't find the container with id 268a559cb08d765a5842d8fe5be272cfcb407282c850f4b2f25e82299f997f70 Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.641453 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-7vsxb"] Feb 16 11:17:30 crc kubenswrapper[4797]: W0216 11:17:30.647011 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ebc66d_6ff0_4bbf_9578_b4d5a1dce1b8.slice/crio-767a12a92fe31779456ad221f02f4bb191889b7972917bd6d903bfdde5eb903d WatchSource:0}: Error finding container 767a12a92fe31779456ad221f02f4bb191889b7972917bd6d903bfdde5eb903d: Status 404 returned error can't find the container with id 767a12a92fe31779456ad221f02f4bb191889b7972917bd6d903bfdde5eb903d Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.728120 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" event={"ID":"f6001f2a-b067-4fe6-b250-bce9a306e7e6","Type":"ContainerStarted","Data":"0688c197f318b6d02c506cf8481b82701efb91a39f6df21a5fb054c752939347"} Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.729516 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" event={"ID":"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8","Type":"ContainerStarted","Data":"767a12a92fe31779456ad221f02f4bb191889b7972917bd6d903bfdde5eb903d"} Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.731432 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" event={"ID":"f30f63d1-0224-458f-9dcb-c5ca305c5a10","Type":"ContainerStarted","Data":"ed38037a27bb42dfea1b1933b6d2db31b5a6c45a707c4921985eff02be44a6aa"} Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.733161 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" event={"ID":"a5d0ca59-5007-44d2-86e6-342c60bddb88","Type":"ContainerStarted","Data":"37b3b889f688a35751f5b02e1f3b19ed5dd580b5eaf81f34605ed0cefb03419f"} Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.733202 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" event={"ID":"a5d0ca59-5007-44d2-86e6-342c60bddb88","Type":"ContainerStarted","Data":"268a559cb08d765a5842d8fe5be272cfcb407282c850f4b2f25e82299f997f70"} Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.750805 4797 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-259w6" podStartSLOduration=26.844633347 podStartE2EDuration="30.750763318s" podCreationTimestamp="2026-02-16 11:17:00 +0000 UTC" firstStartedPulling="2026-02-16 11:17:26.437853578 +0000 UTC m=+641.158038558" lastFinishedPulling="2026-02-16 11:17:30.343983549 +0000 UTC m=+645.064168529" observedRunningTime="2026-02-16 11:17:30.748136919 +0000 UTC m=+645.468321909" watchObservedRunningTime="2026-02-16 11:17:30.750763318 +0000 UTC m=+645.470948338" Feb 16 11:17:30 crc kubenswrapper[4797]: I0216 11:17:30.779397 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb" podStartSLOduration=30.779380003 podStartE2EDuration="30.779380003s" podCreationTimestamp="2026-02-16 11:17:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:17:30.775229963 +0000 UTC m=+645.495414973" watchObservedRunningTime="2026-02-16 11:17:30.779380003 +0000 UTC m=+645.499564993" Feb 16 11:17:31 crc kubenswrapper[4797]: I0216 11:17:31.982295 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:31 crc kubenswrapper[4797]: I0216 11:17:31.982813 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" Feb 16 11:17:32 crc kubenswrapper[4797]: I0216 11:17:32.180900 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc"] Feb 16 11:17:32 crc kubenswrapper[4797]: W0216 11:17:32.198005 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6635912a_ab64_4ee0_8b76_17ed2b17a7cd.slice/crio-fb1cc653eb6b2e29f99840a64e8e8159acbdb0d57d3c165e68c709a2a3203e64 WatchSource:0}: Error finding container fb1cc653eb6b2e29f99840a64e8e8159acbdb0d57d3c165e68c709a2a3203e64: Status 404 returned error can't find the container with id fb1cc653eb6b2e29f99840a64e8e8159acbdb0d57d3c165e68c709a2a3203e64 Feb 16 11:17:32 crc kubenswrapper[4797]: I0216 11:17:32.748361 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" event={"ID":"6635912a-ab64-4ee0-8b76-17ed2b17a7cd","Type":"ContainerStarted","Data":"fb1cc653eb6b2e29f99840a64e8e8159acbdb0d57d3c165e68c709a2a3203e64"} Feb 16 11:17:33 crc kubenswrapper[4797]: I0216 11:17:33.758333 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" event={"ID":"f6001f2a-b067-4fe6-b250-bce9a306e7e6","Type":"ContainerStarted","Data":"86d287a526acd9ca29b22f115c1faa2aea4ea108ea808178220a4fe3ff32d967"} Feb 16 11:17:33 crc kubenswrapper[4797]: I0216 11:17:33.758757 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:33 crc kubenswrapper[4797]: I0216 11:17:33.778105 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" podStartSLOduration=31.398793054 podStartE2EDuration="33.778082891s" podCreationTimestamp="2026-02-16 11:17:00 +0000 UTC" firstStartedPulling="2026-02-16 
11:17:30.474928016 +0000 UTC m=+645.195112996" lastFinishedPulling="2026-02-16 11:17:32.854217853 +0000 UTC m=+647.574402833" observedRunningTime="2026-02-16 11:17:33.774853236 +0000 UTC m=+648.495038216" watchObservedRunningTime="2026-02-16 11:17:33.778082891 +0000 UTC m=+648.498267901" Feb 16 11:17:37 crc kubenswrapper[4797]: I0216 11:17:37.782599 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" event={"ID":"c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8","Type":"ContainerStarted","Data":"346b6e7bfcbfc0b7e2ae085d6ed96b20534fa9fd61553f62a1a461d937c77963"} Feb 16 11:17:37 crc kubenswrapper[4797]: I0216 11:17:37.783141 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:37 crc kubenswrapper[4797]: I0216 11:17:37.785704 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" event={"ID":"6635912a-ab64-4ee0-8b76-17ed2b17a7cd","Type":"ContainerStarted","Data":"baefa14c74e2a882c8eaed96cef7e7a306d08b4da2b1d99a27075c7ed177d32b"} Feb 16 11:17:37 crc kubenswrapper[4797]: I0216 11:17:37.786771 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" Feb 16 11:17:37 crc kubenswrapper[4797]: I0216 11:17:37.808559 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-7vsxb" podStartSLOduration=31.819503639 podStartE2EDuration="37.808536143s" podCreationTimestamp="2026-02-16 11:17:00 +0000 UTC" firstStartedPulling="2026-02-16 11:17:30.649986608 +0000 UTC m=+645.370171588" lastFinishedPulling="2026-02-16 11:17:36.639019112 +0000 UTC m=+651.359204092" observedRunningTime="2026-02-16 11:17:37.804158054 +0000 UTC m=+652.524343044" watchObservedRunningTime="2026-02-16 11:17:37.808536143 +0000 UTC m=+652.528721133" Feb 16 11:17:37 crc kubenswrapper[4797]: I0216 11:17:37.826470 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-m4hxc" podStartSLOduration=34.406528314 podStartE2EDuration="38.826448167s" podCreationTimestamp="2026-02-16 11:16:59 +0000 UTC" firstStartedPulling="2026-02-16 11:17:32.199705156 +0000 UTC m=+646.919890126" lastFinishedPulling="2026-02-16 11:17:36.619624999 +0000 UTC m=+651.339809979" observedRunningTime="2026-02-16 11:17:37.825216724 +0000 UTC m=+652.545401724" watchObservedRunningTime="2026-02-16 11:17:37.826448167 +0000 UTC m=+652.546633147" Feb 16 11:17:40 crc kubenswrapper[4797]: I0216 11:17:40.636504 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-m98kn" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.297927 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-48xwg"] Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.298859 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.303785 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.310981 4797 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-th8tk" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.314601 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.316234 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-7p57w"] Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.317278 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7p57w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.319571 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-48xwg"] Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.322233 4797 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-lvt4g" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.331156 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7p57w"] Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.352292 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-85h5w"] Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.354093 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.358152 4797 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6mq5d" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.370565 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-85h5w"] Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.419026 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47sdx\" (UniqueName: \"kubernetes.io/projected/d897848c-20a8-4efe-b2a0-60d5349f5cc0-kube-api-access-47sdx\") pod \"cert-manager-858654f9db-7p57w\" (UID: \"d897848c-20a8-4efe-b2a0-60d5349f5cc0\") " pod="cert-manager/cert-manager-858654f9db-7p57w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.419102 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74cqm\" (UniqueName: \"kubernetes.io/projected/d63e62f5-bb87-459b-b430-fe6dccef3dd7-kube-api-access-74cqm\") pod \"cert-manager-cainjector-cf98fcc89-48xwg\" (UID: \"d63e62f5-bb87-459b-b430-fe6dccef3dd7\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.520834 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47sdx\" (UniqueName: \"kubernetes.io/projected/d897848c-20a8-4efe-b2a0-60d5349f5cc0-kube-api-access-47sdx\") pod \"cert-manager-858654f9db-7p57w\" (UID: \"d897848c-20a8-4efe-b2a0-60d5349f5cc0\") " pod="cert-manager/cert-manager-858654f9db-7p57w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 
11:17:46.520905 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k65cb\" (UniqueName: \"kubernetes.io/projected/ea140fff-b2af-47e6-beb4-3edc6c997e62-kube-api-access-k65cb\") pod \"cert-manager-webhook-687f57d79b-85h5w\" (UID: \"ea140fff-b2af-47e6-beb4-3edc6c997e62\") " pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.520931 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74cqm\" (UniqueName: \"kubernetes.io/projected/d63e62f5-bb87-459b-b430-fe6dccef3dd7-kube-api-access-74cqm\") pod \"cert-manager-cainjector-cf98fcc89-48xwg\" (UID: \"d63e62f5-bb87-459b-b430-fe6dccef3dd7\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.538343 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47sdx\" (UniqueName: \"kubernetes.io/projected/d897848c-20a8-4efe-b2a0-60d5349f5cc0-kube-api-access-47sdx\") pod \"cert-manager-858654f9db-7p57w\" (UID: \"d897848c-20a8-4efe-b2a0-60d5349f5cc0\") " pod="cert-manager/cert-manager-858654f9db-7p57w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.542160 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74cqm\" (UniqueName: \"kubernetes.io/projected/d63e62f5-bb87-459b-b430-fe6dccef3dd7-kube-api-access-74cqm\") pod \"cert-manager-cainjector-cf98fcc89-48xwg\" (UID: \"d63e62f5-bb87-459b-b430-fe6dccef3dd7\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.621566 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k65cb\" (UniqueName: \"kubernetes.io/projected/ea140fff-b2af-47e6-beb4-3edc6c997e62-kube-api-access-k65cb\") pod \"cert-manager-webhook-687f57d79b-85h5w\" (UID: \"ea140fff-b2af-47e6-beb4-3edc6c997e62\") " pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.621973 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.640850 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-7p57w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.642119 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k65cb\" (UniqueName: \"kubernetes.io/projected/ea140fff-b2af-47e6-beb4-3edc6c997e62-kube-api-access-k65cb\") pod \"cert-manager-webhook-687f57d79b-85h5w\" (UID: \"ea140fff-b2af-47e6-beb4-3edc6c997e62\") " pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" Feb 16 11:17:46 crc kubenswrapper[4797]: I0216 11:17:46.675030 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" Feb 16 11:17:47 crc kubenswrapper[4797]: I0216 11:17:47.031723 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-85h5w"] Feb 16 11:17:47 crc kubenswrapper[4797]: W0216 11:17:47.035350 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea140fff_b2af_47e6_beb4_3edc6c997e62.slice/crio-cf766c75a0a8d33cbbf1c349f42215141b186a0ef7ba99e08b9317fcedddee8a WatchSource:0}: Error finding container cf766c75a0a8d33cbbf1c349f42215141b186a0ef7ba99e08b9317fcedddee8a: Status 404 returned error can't find the container with id cf766c75a0a8d33cbbf1c349f42215141b186a0ef7ba99e08b9317fcedddee8a Feb 16 11:17:47 crc kubenswrapper[4797]: I0216 11:17:47.079301 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-7p57w"] Feb 16 11:17:47 crc kubenswrapper[4797]: W0216 11:17:47.082478 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd897848c_20a8_4efe_b2a0_60d5349f5cc0.slice/crio-e6b7769fbdfc6264444fe741f1e9eca5a1c155c018dd8e5550ba6d336251a2df WatchSource:0}: Error finding container e6b7769fbdfc6264444fe741f1e9eca5a1c155c018dd8e5550ba6d336251a2df: Status 404 returned error can't find the container with id e6b7769fbdfc6264444fe741f1e9eca5a1c155c018dd8e5550ba6d336251a2df Feb 16 11:17:47 crc kubenswrapper[4797]: I0216 11:17:47.093420 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-48xwg"] Feb 16 11:17:47 crc kubenswrapper[4797]: W0216 11:17:47.099791 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd63e62f5_bb87_459b_b430_fe6dccef3dd7.slice/crio-a5ca7ea2255f8faa3df2e83c0ad6d110dcb29a08ce64087587a7c5d424f21ad3 WatchSource:0}: Error finding container a5ca7ea2255f8faa3df2e83c0ad6d110dcb29a08ce64087587a7c5d424f21ad3: Status 404 returned error can't find the container with id a5ca7ea2255f8faa3df2e83c0ad6d110dcb29a08ce64087587a7c5d424f21ad3 Feb 16 11:17:47 crc kubenswrapper[4797]: I0216 11:17:47.838556 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" event={"ID":"d63e62f5-bb87-459b-b430-fe6dccef3dd7","Type":"ContainerStarted","Data":"a5ca7ea2255f8faa3df2e83c0ad6d110dcb29a08ce64087587a7c5d424f21ad3"} Feb 16 11:17:47 crc kubenswrapper[4797]: I0216 11:17:47.839946 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7p57w" event={"ID":"d897848c-20a8-4efe-b2a0-60d5349f5cc0","Type":"ContainerStarted","Data":"e6b7769fbdfc6264444fe741f1e9eca5a1c155c018dd8e5550ba6d336251a2df"} Feb 16 11:17:47 crc kubenswrapper[4797]: I0216 11:17:47.841480 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" event={"ID":"ea140fff-b2af-47e6-beb4-3edc6c997e62","Type":"ContainerStarted","Data":"cf766c75a0a8d33cbbf1c349f42215141b186a0ef7ba99e08b9317fcedddee8a"} Feb 16 11:17:50 crc kubenswrapper[4797]: I0216 11:17:50.860291 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-7p57w" event={"ID":"d897848c-20a8-4efe-b2a0-60d5349f5cc0","Type":"ContainerStarted","Data":"ecaa65036a0abd13148a8afd4424dd02f4ab46b44348b1c23991de32c6d699ed"} Feb 16 11:17:50 crc 
kubenswrapper[4797]: I0216 11:17:50.862451 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" event={"ID":"d63e62f5-bb87-459b-b430-fe6dccef3dd7","Type":"ContainerStarted","Data":"9e65d80bff5d9cf98459f3cf0a027cc487de8e9903d7148c40c976e6b2c0b59a"} Feb 16 11:17:50 crc kubenswrapper[4797]: I0216 11:17:50.864707 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" event={"ID":"ea140fff-b2af-47e6-beb4-3edc6c997e62","Type":"ContainerStarted","Data":"e5bb4edb66b901a7fe53ef62427f55a873cbd5e8ca68718f41a74ff6676a3516"} Feb 16 11:17:50 crc kubenswrapper[4797]: I0216 11:17:50.864866 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" Feb 16 11:17:50 crc kubenswrapper[4797]: I0216 11:17:50.897043 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-7p57w" podStartSLOduration=1.383908031 podStartE2EDuration="4.897019505s" podCreationTimestamp="2026-02-16 11:17:46 +0000 UTC" firstStartedPulling="2026-02-16 11:17:47.0846656 +0000 UTC m=+661.804850580" lastFinishedPulling="2026-02-16 11:17:50.597777074 +0000 UTC m=+665.317962054" observedRunningTime="2026-02-16 11:17:50.886481809 +0000 UTC m=+665.606666789" watchObservedRunningTime="2026-02-16 11:17:50.897019505 +0000 UTC m=+665.617204485" Feb 16 11:17:50 crc kubenswrapper[4797]: I0216 11:17:50.933244 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" podStartSLOduration=1.4097322 podStartE2EDuration="4.933227004s" podCreationTimestamp="2026-02-16 11:17:46 +0000 UTC" firstStartedPulling="2026-02-16 11:17:47.036974321 +0000 UTC m=+661.757159301" lastFinishedPulling="2026-02-16 11:17:50.560469125 +0000 UTC m=+665.280654105" observedRunningTime="2026-02-16 11:17:50.931993321 +0000 UTC m=+665.652178301" watchObservedRunningTime="2026-02-16 11:17:50.933227004 +0000 UTC m=+665.653411984" Feb 16 11:17:50 crc kubenswrapper[4797]: I0216 11:17:50.949212 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-48xwg" podStartSLOduration=1.449169966 podStartE2EDuration="4.949192226s" podCreationTimestamp="2026-02-16 11:17:46 +0000 UTC" firstStartedPulling="2026-02-16 11:17:47.101807204 +0000 UTC m=+661.821992174" lastFinishedPulling="2026-02-16 11:17:50.601829454 +0000 UTC m=+665.322014434" observedRunningTime="2026-02-16 11:17:50.946827002 +0000 UTC m=+665.667011992" watchObservedRunningTime="2026-02-16 11:17:50.949192226 +0000 UTC m=+665.669377206" Feb 16 11:17:56 crc kubenswrapper[4797]: I0216 11:17:56.679762 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-85h5w" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.210780 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp"] Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.214830 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.217927 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.226557 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp"] Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.268667 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.269006 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.269114 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgpld\" (UniqueName: \"kubernetes.io/projected/adde08ba-a127-4bfe-87fa-af192ad0a1de-kube-api-access-qgpld\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.370727 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.370781 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgpld\" (UniqueName: \"kubernetes.io/projected/adde08ba-a127-4bfe-87fa-af192ad0a1de-kube-api-access-qgpld\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.370822 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.371390 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.371418 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.398304 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgpld\" (UniqueName: \"kubernetes.io/projected/adde08ba-a127-4bfe-87fa-af192ad0a1de-kube-api-access-qgpld\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.536284 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:19 crc kubenswrapper[4797]: I0216 11:18:19.782481 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp"] Feb 16 11:18:20 crc kubenswrapper[4797]: I0216 11:18:20.021900 4797 generic.go:334] "Generic (PLEG): container finished" podID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerID="d571c328ca280318d9d17eff94e09b267059c6e93071bfb1a77e0e11dc7a312b" exitCode=0 Feb 16 11:18:20 crc kubenswrapper[4797]: I0216 11:18:20.022023 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" event={"ID":"adde08ba-a127-4bfe-87fa-af192ad0a1de","Type":"ContainerDied","Data":"d571c328ca280318d9d17eff94e09b267059c6e93071bfb1a77e0e11dc7a312b"} Feb 16 11:18:20 crc kubenswrapper[4797]: I0216 11:18:20.022117 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" event={"ID":"adde08ba-a127-4bfe-87fa-af192ad0a1de","Type":"ContainerStarted","Data":"d4cb245b7377080f1f8cdda025a04e8326b7fb06b403a485e009328de1b2d0c4"} Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.774435 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.775119 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.779366 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.780246 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.780523 4797 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-zrcmb" Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.787748 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.898621 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\") pod \"minio\" (UID: \"6b7d9488-5536-422c-801b-72714946b8d9\") " pod="minio-dev/minio" Feb 16 11:18:21 crc kubenswrapper[4797]: I0216 11:18:21.898807 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ks7wk\" (UniqueName: \"kubernetes.io/projected/6b7d9488-5536-422c-801b-72714946b8d9-kube-api-access-ks7wk\") pod \"minio\" (UID: \"6b7d9488-5536-422c-801b-72714946b8d9\") " pod="minio-dev/minio" Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.000520 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ks7wk\" (UniqueName: \"kubernetes.io/projected/6b7d9488-5536-422c-801b-72714946b8d9-kube-api-access-ks7wk\") pod \"minio\" (UID: \"6b7d9488-5536-422c-801b-72714946b8d9\") " pod="minio-dev/minio" Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.000597 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\") pod \"minio\" (UID: \"6b7d9488-5536-422c-801b-72714946b8d9\") " pod="minio-dev/minio" Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.004954 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.005038 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\") pod \"minio\" (UID: \"6b7d9488-5536-422c-801b-72714946b8d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a629703587327a58d44f36688f01e92c0a1ca42b867a43c6562b30d31ac83558/globalmount\"" pod="minio-dev/minio" Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.026166 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-dfa8679a-f04f-4ae2-b03d-66642dbc7fde\") pod \"minio\" (UID: \"6b7d9488-5536-422c-801b-72714946b8d9\") " pod="minio-dev/minio" Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.027153 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ks7wk\" (UniqueName: \"kubernetes.io/projected/6b7d9488-5536-422c-801b-72714946b8d9-kube-api-access-ks7wk\") pod \"minio\" (UID: \"6b7d9488-5536-422c-801b-72714946b8d9\") " pod="minio-dev/minio" Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.093932 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 16 11:18:22 crc kubenswrapper[4797]: I0216 11:18:22.330142 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 16 11:18:22 crc kubenswrapper[4797]: W0216 11:18:22.333910 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b7d9488_5536_422c_801b_72714946b8d9.slice/crio-a123c6fb6afa000995dc93765238de88fe3a45cf713996901896771417f7c304 WatchSource:0}: Error finding container a123c6fb6afa000995dc93765238de88fe3a45cf713996901896771417f7c304: Status 404 returned error can't find the container with id a123c6fb6afa000995dc93765238de88fe3a45cf713996901896771417f7c304 Feb 16 11:18:23 crc kubenswrapper[4797]: I0216 11:18:23.052307 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6b7d9488-5536-422c-801b-72714946b8d9","Type":"ContainerStarted","Data":"a123c6fb6afa000995dc93765238de88fe3a45cf713996901896771417f7c304"} Feb 16 11:18:25 crc kubenswrapper[4797]: I0216 11:18:25.062723 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6b7d9488-5536-422c-801b-72714946b8d9","Type":"ContainerStarted","Data":"d59a2afd5b7e11022a32dbbc4cd45ee3f83c9aa24ae0648c4c593ed4346de1d6"} Feb 16 11:18:25 crc kubenswrapper[4797]: I0216 11:18:25.077182 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.640400934 podStartE2EDuration="7.077165867s" podCreationTimestamp="2026-02-16 11:18:18 +0000 UTC" firstStartedPulling="2026-02-16 11:18:22.336417176 +0000 UTC m=+697.056602156" lastFinishedPulling="2026-02-16 11:18:24.773182109 +0000 UTC m=+699.493367089" observedRunningTime="2026-02-16 11:18:25.075886283 +0000 UTC m=+699.796071263" watchObservedRunningTime="2026-02-16 11:18:25.077165867 +0000 UTC m=+699.797350847" Feb 16 11:18:26 crc kubenswrapper[4797]: I0216 11:18:26.069927 4797 generic.go:334] "Generic (PLEG): container finished" podID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerID="ce9f33747f9cadd7eb9f0242475453f152af0db109b639be9dc478ef1263ad5e" 
exitCode=0 Feb 16 11:18:26 crc kubenswrapper[4797]: I0216 11:18:26.070030 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" event={"ID":"adde08ba-a127-4bfe-87fa-af192ad0a1de","Type":"ContainerDied","Data":"ce9f33747f9cadd7eb9f0242475453f152af0db109b639be9dc478ef1263ad5e"} Feb 16 11:18:27 crc kubenswrapper[4797]: I0216 11:18:27.079224 4797 generic.go:334] "Generic (PLEG): container finished" podID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerID="4cdaf381533cd88531ef5f5eedb2d99238f37970217c529c2e1397d7824e6e0f" exitCode=0 Feb 16 11:18:27 crc kubenswrapper[4797]: I0216 11:18:27.079295 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" event={"ID":"adde08ba-a127-4bfe-87fa-af192ad0a1de","Type":"ContainerDied","Data":"4cdaf381533cd88531ef5f5eedb2d99238f37970217c529c2e1397d7824e6e0f"} Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.313922 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.483493 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-bundle\") pod \"adde08ba-a127-4bfe-87fa-af192ad0a1de\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.483629 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-util\") pod \"adde08ba-a127-4bfe-87fa-af192ad0a1de\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.483670 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgpld\" (UniqueName: \"kubernetes.io/projected/adde08ba-a127-4bfe-87fa-af192ad0a1de-kube-api-access-qgpld\") pod \"adde08ba-a127-4bfe-87fa-af192ad0a1de\" (UID: \"adde08ba-a127-4bfe-87fa-af192ad0a1de\") " Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.484760 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-bundle" (OuterVolumeSpecName: "bundle") pod "adde08ba-a127-4bfe-87fa-af192ad0a1de" (UID: "adde08ba-a127-4bfe-87fa-af192ad0a1de"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.490196 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adde08ba-a127-4bfe-87fa-af192ad0a1de-kube-api-access-qgpld" (OuterVolumeSpecName: "kube-api-access-qgpld") pod "adde08ba-a127-4bfe-87fa-af192ad0a1de" (UID: "adde08ba-a127-4bfe-87fa-af192ad0a1de"). InnerVolumeSpecName "kube-api-access-qgpld". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.493589 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-util" (OuterVolumeSpecName: "util") pod "adde08ba-a127-4bfe-87fa-af192ad0a1de" (UID: "adde08ba-a127-4bfe-87fa-af192ad0a1de"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.585192 4797 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.585221 4797 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/adde08ba-a127-4bfe-87fa-af192ad0a1de-util\") on node \"crc\" DevicePath \"\"" Feb 16 11:18:28 crc kubenswrapper[4797]: I0216 11:18:28.585230 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgpld\" (UniqueName: \"kubernetes.io/projected/adde08ba-a127-4bfe-87fa-af192ad0a1de-kube-api-access-qgpld\") on node \"crc\" DevicePath \"\"" Feb 16 11:18:29 crc kubenswrapper[4797]: I0216 11:18:29.097659 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" event={"ID":"adde08ba-a127-4bfe-87fa-af192ad0a1de","Type":"ContainerDied","Data":"d4cb245b7377080f1f8cdda025a04e8326b7fb06b403a485e009328de1b2d0c4"} Feb 16 11:18:29 crc kubenswrapper[4797]: I0216 11:18:29.097991 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4cb245b7377080f1f8cdda025a04e8326b7fb06b403a485e009328de1b2d0c4" Feb 16 11:18:29 crc kubenswrapper[4797]: I0216 11:18:29.097741 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.566019 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp"] Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.566619 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerName="pull" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.566632 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerName="pull" Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.566642 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerName="extract" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.566648 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerName="extract" Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.566663 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerName="util" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.566670 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerName="util" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.566769 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="adde08ba-a127-4bfe-87fa-af192ad0a1de" containerName="extract" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.567340 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: W0216 11:18:38.571968 4797 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-manager-config": failed to list *v1.ConfigMap: configmaps "loki-operator-manager-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Feb 16 11:18:38 crc kubenswrapper[4797]: W0216 11:18:38.572015 4797 reflector.go:561] object-"openshift-operators-redhat"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.572027 4797 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-manager-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"loki-operator-manager-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.572072 4797 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 11:18:38 crc kubenswrapper[4797]: W0216 11:18:38.571973 4797 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-metrics": failed to list *v1.Secret: secrets "loki-operator-metrics" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.572153 4797 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"loki-operator-metrics\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 11:18:38 crc kubenswrapper[4797]: W0216 11:18:38.571976 4797 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-q47zf": failed to list *v1.Secret: secrets "loki-operator-controller-manager-dockercfg-q47zf" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.572183 4797 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-controller-manager-dockercfg-q47zf\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"loki-operator-controller-manager-dockercfg-q47zf\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 11:18:38 crc kubenswrapper[4797]: W0216 11:18:38.571975 4797 reflector.go:561] object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert": failed to list *v1.Secret: secrets "loki-operator-controller-manager-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.572203 4797 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"loki-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"loki-operator-controller-manager-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 11:18:38 crc kubenswrapper[4797]: W0216 11:18:38.572917 4797 reflector.go:561] object-"openshift-operators-redhat"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-operators-redhat": no relationship found between node 'crc' and this object Feb 16 11:18:38 crc kubenswrapper[4797]: E0216 11:18:38.572962 4797 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators-redhat\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-operators-redhat\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.611979 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp"] Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.716119 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-apiservice-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.716172 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.716204 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-webhook-cert\") 
pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.716228 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrf52\" (UniqueName: \"kubernetes.io/projected/d81adcdb-f1e8-4f65-b501-18b104ad7a02-kube-api-access-xrf52\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.716278 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d81adcdb-f1e8-4f65-b501-18b104ad7a02-manager-config\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.817917 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.817988 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-webhook-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.818021 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrf52\" (UniqueName: \"kubernetes.io/projected/d81adcdb-f1e8-4f65-b501-18b104ad7a02-kube-api-access-xrf52\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.818091 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d81adcdb-f1e8-4f65-b501-18b104ad7a02-manager-config\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:38 crc kubenswrapper[4797]: I0216 11:18:38.818138 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-apiservice-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.509983 4797 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-q47zf" Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.559155 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.569394 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/d81adcdb-f1e8-4f65-b501-18b104ad7a02-manager-config\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.586783 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 11:18:39 crc kubenswrapper[4797]: E0216 11:18:39.818503 4797 secret.go:188] Couldn't get secret openshift-operators-redhat/loki-operator-controller-manager-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 11:18:39 crc kubenswrapper[4797]: E0216 11:18:39.819104 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-apiservice-cert podName:d81adcdb-f1e8-4f65-b501-18b104ad7a02 nodeName:}" failed. No retries permitted until 2026-02-16 11:18:40.31908272 +0000 UTC m=+715.039267700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-apiservice-cert") pod "loki-operator-controller-manager-547985c4bd-snwnp" (UID: "d81adcdb-f1e8-4f65-b501-18b104ad7a02") : failed to sync secret cache: timed out waiting for the condition Feb 16 11:18:39 crc kubenswrapper[4797]: E0216 11:18:39.818546 4797 secret.go:188] Couldn't get secret openshift-operators-redhat/loki-operator-controller-manager-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 11:18:39 crc kubenswrapper[4797]: E0216 11:18:39.819253 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-webhook-cert podName:d81adcdb-f1e8-4f65-b501-18b104ad7a02 nodeName:}" failed. No retries permitted until 2026-02-16 11:18:40.319245595 +0000 UTC m=+715.039430565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-webhook-cert") pod "loki-operator-controller-manager-547985c4bd-snwnp" (UID: "d81adcdb-f1e8-4f65-b501-18b104ad7a02") : failed to sync secret cache: timed out waiting for the condition Feb 16 11:18:39 crc kubenswrapper[4797]: E0216 11:18:39.818546 4797 secret.go:188] Couldn't get secret openshift-operators-redhat/loki-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 16 11:18:39 crc kubenswrapper[4797]: E0216 11:18:39.819457 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-loki-operator-metrics-cert podName:d81adcdb-f1e8-4f65-b501-18b104ad7a02 nodeName:}" failed. No retries permitted until 2026-02-16 11:18:40.31944742 +0000 UTC m=+715.039632400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "loki-operator-metrics-cert" (UniqueName: "kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-loki-operator-metrics-cert") pod "loki-operator-controller-manager-547985c4bd-snwnp" (UID: "d81adcdb-f1e8-4f65-b501-18b104ad7a02") : failed to sync secret cache: timed out waiting for the condition Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.833803 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.877014 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.883712 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrf52\" (UniqueName: \"kubernetes.io/projected/d81adcdb-f1e8-4f65-b501-18b104ad7a02-kube-api-access-xrf52\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:39 crc kubenswrapper[4797]: I0216 11:18:39.999547 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.340077 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-apiservice-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.340141 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.340175 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-webhook-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.344392 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-apiservice-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.344421 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-webhook-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.344845 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d81adcdb-f1e8-4f65-b501-18b104ad7a02-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-547985c4bd-snwnp\" (UID: \"d81adcdb-f1e8-4f65-b501-18b104ad7a02\") " pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.385722 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:40 crc kubenswrapper[4797]: I0216 11:18:40.665939 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp"] Feb 16 11:18:41 crc kubenswrapper[4797]: I0216 11:18:41.169452 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" event={"ID":"d81adcdb-f1e8-4f65-b501-18b104ad7a02","Type":"ContainerStarted","Data":"f35706955329939f6f40cc4c6928af796bc5b5a48d1b30bb57346dae4d1a8a61"} Feb 16 11:18:45 crc kubenswrapper[4797]: I0216 11:18:45.194497 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" event={"ID":"d81adcdb-f1e8-4f65-b501-18b104ad7a02","Type":"ContainerStarted","Data":"4959e478fc600b8d9d879deaea164428adf93e152b120bf9680ad8ae00099ec4"} Feb 16 11:18:53 crc kubenswrapper[4797]: I0216 11:18:53.255737 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" event={"ID":"d81adcdb-f1e8-4f65-b501-18b104ad7a02","Type":"ContainerStarted","Data":"f9128d011f673aabd108d489f2444f9d481d56d2e341cb7bca3c6f99d86edc50"} Feb 16 11:18:53 crc kubenswrapper[4797]: I0216 11:18:53.256272 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:53 crc kubenswrapper[4797]: I0216 11:18:53.257631 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" Feb 16 11:18:53 crc kubenswrapper[4797]: I0216 11:18:53.279041 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-547985c4bd-snwnp" podStartSLOduration=3.127881097 podStartE2EDuration="15.279013587s" podCreationTimestamp="2026-02-16 11:18:38 +0000 UTC" firstStartedPulling="2026-02-16 11:18:40.679417101 +0000 UTC m=+715.399602081" lastFinishedPulling="2026-02-16 11:18:52.830549551 +0000 UTC m=+727.550734571" observedRunningTime="2026-02-16 11:18:53.277354753 +0000 UTC m=+727.997539763" watchObservedRunningTime="2026-02-16 11:18:53.279013587 +0000 UTC m=+727.999198597" Feb 16 11:19:11 crc kubenswrapper[4797]: I0216 11:19:11.704088 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:19:11 crc kubenswrapper[4797]: I0216 11:19:11.704890 4797 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:19:22 crc kubenswrapper[4797]: I0216 11:19:22.926737 4797 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 11:19:25 crc kubenswrapper[4797]: I0216 11:19:25.825363 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k"] Feb 16 11:19:25 crc kubenswrapper[4797]: I0216 11:19:25.827670 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:25 crc kubenswrapper[4797]: I0216 11:19:25.830453 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 11:19:25 crc kubenswrapper[4797]: I0216 11:19:25.842190 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k"] Feb 16 11:19:25 crc kubenswrapper[4797]: I0216 11:19:25.963529 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:25 crc kubenswrapper[4797]: I0216 11:19:25.963600 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdl6f\" (UniqueName: \"kubernetes.io/projected/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-kube-api-access-xdl6f\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:25 crc kubenswrapper[4797]: I0216 11:19:25.963708 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.064879 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.065036 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-bundle\") pod 
\"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.065072 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdl6f\" (UniqueName: \"kubernetes.io/projected/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-kube-api-access-xdl6f\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.065790 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.065869 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.090399 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdl6f\" (UniqueName: \"kubernetes.io/projected/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-kube-api-access-xdl6f\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.150215 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.391917 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k"] Feb 16 11:19:26 crc kubenswrapper[4797]: W0216 11:19:26.396781 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2dd227f_1ab6_480b_9f5d_5957cfba1d30.slice/crio-101a8c754cd39d2974908d0d1a4305ab5f8bf9206c4c75aa87df65b8d333753e WatchSource:0}: Error finding container 101a8c754cd39d2974908d0d1a4305ab5f8bf9206c4c75aa87df65b8d333753e: Status 404 returned error can't find the container with id 101a8c754cd39d2974908d0d1a4305ab5f8bf9206c4c75aa87df65b8d333753e Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.572033 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" event={"ID":"d2dd227f-1ab6-480b-9f5d-5957cfba1d30","Type":"ContainerStarted","Data":"47be149bf1c2d2c8f7f77bcab10ec87b4f1494d2d9c6c2d71448e831b2df6b94"} Feb 16 11:19:26 crc kubenswrapper[4797]: I0216 11:19:26.572074 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" event={"ID":"d2dd227f-1ab6-480b-9f5d-5957cfba1d30","Type":"ContainerStarted","Data":"101a8c754cd39d2974908d0d1a4305ab5f8bf9206c4c75aa87df65b8d333753e"} Feb 16 11:19:27 crc kubenswrapper[4797]: I0216 11:19:27.580416 4797 generic.go:334] "Generic (PLEG): container finished" podID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerID="47be149bf1c2d2c8f7f77bcab10ec87b4f1494d2d9c6c2d71448e831b2df6b94" exitCode=0 Feb 16 11:19:27 crc kubenswrapper[4797]: I0216 11:19:27.580511 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" event={"ID":"d2dd227f-1ab6-480b-9f5d-5957cfba1d30","Type":"ContainerDied","Data":"47be149bf1c2d2c8f7f77bcab10ec87b4f1494d2d9c6c2d71448e831b2df6b94"} Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.359207 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w6lwv"] Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.360524 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.379741 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6lwv"] Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.532214 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-catalog-content\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.532354 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-utilities\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.532613 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqv2l\" (UniqueName: \"kubernetes.io/projected/7225a9dd-da9e-4edd-adf5-13416a4f381c-kube-api-access-jqv2l\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.600690 4797 generic.go:334] "Generic (PLEG): container finished" podID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerID="48a2649cb997570acbbf0c30918ed7383761c45910977796bab581e25d9770c9" exitCode=0 Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.600738 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" event={"ID":"d2dd227f-1ab6-480b-9f5d-5957cfba1d30","Type":"ContainerDied","Data":"48a2649cb997570acbbf0c30918ed7383761c45910977796bab581e25d9770c9"} Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.633958 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqv2l\" (UniqueName: \"kubernetes.io/projected/7225a9dd-da9e-4edd-adf5-13416a4f381c-kube-api-access-jqv2l\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.634020 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-catalog-content\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.634050 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-utilities\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.634460 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-utilities\") pod \"redhat-operators-w6lwv\" (UID: 
\"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.634736 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-catalog-content\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.652275 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqv2l\" (UniqueName: \"kubernetes.io/projected/7225a9dd-da9e-4edd-adf5-13416a4f381c-kube-api-access-jqv2l\") pod \"redhat-operators-w6lwv\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:29 crc kubenswrapper[4797]: I0216 11:19:29.679837 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:30 crc kubenswrapper[4797]: I0216 11:19:30.134177 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6lwv"] Feb 16 11:19:30 crc kubenswrapper[4797]: W0216 11:19:30.141710 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7225a9dd_da9e_4edd_adf5_13416a4f381c.slice/crio-be9affc72a799b6fc595c7fe3e06e993acd7646c16f68968e72059714813162d WatchSource:0}: Error finding container be9affc72a799b6fc595c7fe3e06e993acd7646c16f68968e72059714813162d: Status 404 returned error can't find the container with id be9affc72a799b6fc595c7fe3e06e993acd7646c16f68968e72059714813162d Feb 16 11:19:30 crc kubenswrapper[4797]: I0216 11:19:30.607456 4797 generic.go:334] "Generic (PLEG): container finished" podID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerID="c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664" exitCode=0 Feb 16 11:19:30 crc kubenswrapper[4797]: I0216 11:19:30.607634 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6lwv" event={"ID":"7225a9dd-da9e-4edd-adf5-13416a4f381c","Type":"ContainerDied","Data":"c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664"} Feb 16 11:19:30 crc kubenswrapper[4797]: I0216 11:19:30.607855 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6lwv" event={"ID":"7225a9dd-da9e-4edd-adf5-13416a4f381c","Type":"ContainerStarted","Data":"be9affc72a799b6fc595c7fe3e06e993acd7646c16f68968e72059714813162d"} Feb 16 11:19:30 crc kubenswrapper[4797]: I0216 11:19:30.610807 4797 generic.go:334] "Generic (PLEG): container finished" podID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerID="4e7c81a70d6c14244c6430be5b572d02097c6933b3f1967e7a635ff7108109c4" exitCode=0 Feb 16 11:19:30 crc kubenswrapper[4797]: I0216 11:19:30.610856 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" event={"ID":"d2dd227f-1ab6-480b-9f5d-5957cfba1d30","Type":"ContainerDied","Data":"4e7c81a70d6c14244c6430be5b572d02097c6933b3f1967e7a635ff7108109c4"} Feb 16 11:19:31 crc kubenswrapper[4797]: I0216 11:19:31.617911 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6lwv" 
event={"ID":"7225a9dd-da9e-4edd-adf5-13416a4f381c","Type":"ContainerStarted","Data":"9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4"} Feb 16 11:19:31 crc kubenswrapper[4797]: I0216 11:19:31.922740 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:31 crc kubenswrapper[4797]: I0216 11:19:31.968132 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdl6f\" (UniqueName: \"kubernetes.io/projected/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-kube-api-access-xdl6f\") pod \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " Feb 16 11:19:31 crc kubenswrapper[4797]: I0216 11:19:31.968200 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-bundle\") pod \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " Feb 16 11:19:31 crc kubenswrapper[4797]: I0216 11:19:31.968237 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-util\") pod \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\" (UID: \"d2dd227f-1ab6-480b-9f5d-5957cfba1d30\") " Feb 16 11:19:31 crc kubenswrapper[4797]: I0216 11:19:31.968921 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-bundle" (OuterVolumeSpecName: "bundle") pod "d2dd227f-1ab6-480b-9f5d-5957cfba1d30" (UID: "d2dd227f-1ab6-480b-9f5d-5957cfba1d30"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:19:31 crc kubenswrapper[4797]: I0216 11:19:31.972621 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-kube-api-access-xdl6f" (OuterVolumeSpecName: "kube-api-access-xdl6f") pod "d2dd227f-1ab6-480b-9f5d-5957cfba1d30" (UID: "d2dd227f-1ab6-480b-9f5d-5957cfba1d30"). InnerVolumeSpecName "kube-api-access-xdl6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.066056 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-util" (OuterVolumeSpecName: "util") pod "d2dd227f-1ab6-480b-9f5d-5957cfba1d30" (UID: "d2dd227f-1ab6-480b-9f5d-5957cfba1d30"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.069779 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdl6f\" (UniqueName: \"kubernetes.io/projected/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-kube-api-access-xdl6f\") on node \"crc\" DevicePath \"\"" Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.069804 4797 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.069813 4797 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d2dd227f-1ab6-480b-9f5d-5957cfba1d30-util\") on node \"crc\" DevicePath \"\"" Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.627450 4797 generic.go:334] "Generic (PLEG): container finished" podID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerID="9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4" exitCode=0 Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.627546 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6lwv" event={"ID":"7225a9dd-da9e-4edd-adf5-13416a4f381c","Type":"ContainerDied","Data":"9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4"} Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.630254 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" event={"ID":"d2dd227f-1ab6-480b-9f5d-5957cfba1d30","Type":"ContainerDied","Data":"101a8c754cd39d2974908d0d1a4305ab5f8bf9206c4c75aa87df65b8d333753e"} Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.630310 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k" Feb 16 11:19:32 crc kubenswrapper[4797]: I0216 11:19:32.630318 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="101a8c754cd39d2974908d0d1a4305ab5f8bf9206c4c75aa87df65b8d333753e" Feb 16 11:19:33 crc kubenswrapper[4797]: I0216 11:19:33.637962 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6lwv" event={"ID":"7225a9dd-da9e-4edd-adf5-13416a4f381c","Type":"ContainerStarted","Data":"7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743"} Feb 16 11:19:33 crc kubenswrapper[4797]: I0216 11:19:33.656604 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w6lwv" podStartSLOduration=2.246564418 podStartE2EDuration="4.65657167s" podCreationTimestamp="2026-02-16 11:19:29 +0000 UTC" firstStartedPulling="2026-02-16 11:19:30.609273245 +0000 UTC m=+765.329458225" lastFinishedPulling="2026-02-16 11:19:33.019280477 +0000 UTC m=+767.739465477" observedRunningTime="2026-02-16 11:19:33.651244485 +0000 UTC m=+768.371429465" watchObservedRunningTime="2026-02-16 11:19:33.65657167 +0000 UTC m=+768.376756670" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.363126 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qwfkw"] Feb 16 11:19:36 crc kubenswrapper[4797]: E0216 11:19:36.363711 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerName="util" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.363726 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerName="util" Feb 16 11:19:36 crc kubenswrapper[4797]: E0216 11:19:36.363740 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerName="extract" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.363747 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerName="extract" Feb 16 11:19:36 crc kubenswrapper[4797]: E0216 11:19:36.363759 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerName="pull" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.363765 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerName="pull" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.364452 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2dd227f-1ab6-480b-9f5d-5957cfba1d30" containerName="extract" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.364970 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.368079 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.368193 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-t4jk2" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.372464 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.376022 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qwfkw"] Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.424485 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zcc4\" (UniqueName: \"kubernetes.io/projected/c4eb86f5-1f11-4785-a785-aae078cac6f4-kube-api-access-8zcc4\") pod \"nmstate-operator-694c9596b7-qwfkw\" (UID: \"c4eb86f5-1f11-4785-a785-aae078cac6f4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.525698 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zcc4\" (UniqueName: \"kubernetes.io/projected/c4eb86f5-1f11-4785-a785-aae078cac6f4-kube-api-access-8zcc4\") pod \"nmstate-operator-694c9596b7-qwfkw\" (UID: \"c4eb86f5-1f11-4785-a785-aae078cac6f4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.547569 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zcc4\" (UniqueName: \"kubernetes.io/projected/c4eb86f5-1f11-4785-a785-aae078cac6f4-kube-api-access-8zcc4\") pod \"nmstate-operator-694c9596b7-qwfkw\" (UID: \"c4eb86f5-1f11-4785-a785-aae078cac6f4\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.681841 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" Feb 16 11:19:36 crc kubenswrapper[4797]: I0216 11:19:36.930376 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-qwfkw"] Feb 16 11:19:36 crc kubenswrapper[4797]: W0216 11:19:36.935964 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4eb86f5_1f11_4785_a785_aae078cac6f4.slice/crio-d48dbc3a315dad855ebdc1bfca7cb89b896eeeb437fd14c40e13348a4266f6c2 WatchSource:0}: Error finding container d48dbc3a315dad855ebdc1bfca7cb89b896eeeb437fd14c40e13348a4266f6c2: Status 404 returned error can't find the container with id d48dbc3a315dad855ebdc1bfca7cb89b896eeeb437fd14c40e13348a4266f6c2 Feb 16 11:19:37 crc kubenswrapper[4797]: I0216 11:19:37.659820 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" event={"ID":"c4eb86f5-1f11-4785-a785-aae078cac6f4","Type":"ContainerStarted","Data":"d48dbc3a315dad855ebdc1bfca7cb89b896eeeb437fd14c40e13348a4266f6c2"} Feb 16 11:19:39 crc kubenswrapper[4797]: I0216 11:19:39.674033 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" event={"ID":"c4eb86f5-1f11-4785-a785-aae078cac6f4","Type":"ContainerStarted","Data":"6631fd6011bc014da1711af43d7aa8541071557223a5d024b567106fbfc210ca"} Feb 16 11:19:39 crc kubenswrapper[4797]: I0216 11:19:39.680242 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:39 crc kubenswrapper[4797]: I0216 11:19:39.680277 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:39 crc kubenswrapper[4797]: I0216 11:19:39.695070 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-qwfkw" podStartSLOduration=1.367162223 podStartE2EDuration="3.695052185s" podCreationTimestamp="2026-02-16 11:19:36 +0000 UTC" firstStartedPulling="2026-02-16 11:19:36.938422812 +0000 UTC m=+771.658607792" lastFinishedPulling="2026-02-16 11:19:39.266312774 +0000 UTC m=+773.986497754" observedRunningTime="2026-02-16 11:19:39.691538869 +0000 UTC m=+774.411723849" watchObservedRunningTime="2026-02-16 11:19:39.695052185 +0000 UTC m=+774.415237175" Feb 16 11:19:39 crc kubenswrapper[4797]: I0216 11:19:39.722900 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:40 crc kubenswrapper[4797]: I0216 11:19:40.734244 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:41 crc kubenswrapper[4797]: I0216 11:19:41.703976 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:19:41 crc kubenswrapper[4797]: I0216 11:19:41.704050 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 11:19:41 crc kubenswrapper[4797]: I0216 11:19:41.958731 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6lwv"] Feb 16 11:19:42 crc kubenswrapper[4797]: I0216 11:19:42.689160 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w6lwv" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="registry-server" containerID="cri-o://7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743" gracePeriod=2 Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.183112 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.319496 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-catalog-content\") pod \"7225a9dd-da9e-4edd-adf5-13416a4f381c\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.319664 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqv2l\" (UniqueName: \"kubernetes.io/projected/7225a9dd-da9e-4edd-adf5-13416a4f381c-kube-api-access-jqv2l\") pod \"7225a9dd-da9e-4edd-adf5-13416a4f381c\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.319731 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-utilities\") pod \"7225a9dd-da9e-4edd-adf5-13416a4f381c\" (UID: \"7225a9dd-da9e-4edd-adf5-13416a4f381c\") " Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.321648 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-utilities" (OuterVolumeSpecName: "utilities") pod "7225a9dd-da9e-4edd-adf5-13416a4f381c" (UID: "7225a9dd-da9e-4edd-adf5-13416a4f381c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.328541 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7225a9dd-da9e-4edd-adf5-13416a4f381c-kube-api-access-jqv2l" (OuterVolumeSpecName: "kube-api-access-jqv2l") pod "7225a9dd-da9e-4edd-adf5-13416a4f381c" (UID: "7225a9dd-da9e-4edd-adf5-13416a4f381c"). InnerVolumeSpecName "kube-api-access-jqv2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.421756 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqv2l\" (UniqueName: \"kubernetes.io/projected/7225a9dd-da9e-4edd-adf5-13416a4f381c-kube-api-access-jqv2l\") on node \"crc\" DevicePath \"\"" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.421971 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.479241 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7225a9dd-da9e-4edd-adf5-13416a4f381c" (UID: "7225a9dd-da9e-4edd-adf5-13416a4f381c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.523523 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7225a9dd-da9e-4edd-adf5-13416a4f381c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.707915 4797 generic.go:334] "Generic (PLEG): container finished" podID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerID="7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743" exitCode=0 Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.707994 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6lwv" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.708287 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6lwv" event={"ID":"7225a9dd-da9e-4edd-adf5-13416a4f381c","Type":"ContainerDied","Data":"7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743"} Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.708422 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6lwv" event={"ID":"7225a9dd-da9e-4edd-adf5-13416a4f381c","Type":"ContainerDied","Data":"be9affc72a799b6fc595c7fe3e06e993acd7646c16f68968e72059714813162d"} Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.708453 4797 scope.go:117] "RemoveContainer" containerID="7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.729233 4797 scope.go:117] "RemoveContainer" containerID="9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.739049 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6lwv"] Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.748061 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w6lwv"] Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.761623 4797 scope.go:117] "RemoveContainer" containerID="c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.774910 4797 scope.go:117] "RemoveContainer" containerID="7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743" Feb 16 11:19:44 crc kubenswrapper[4797]: E0216 11:19:44.775912 4797 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743\": container with ID starting with 7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743 not found: ID does not exist" containerID="7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.775944 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743"} err="failed to get container status \"7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743\": rpc error: code = NotFound desc = could not find container \"7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743\": container with ID starting with 7a54fdc9ee4770dec5c957890f6bea7d2462e707a324bca0380c57903a119743 not found: ID does not exist" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.775969 4797 scope.go:117] "RemoveContainer" containerID="9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4" Feb 16 11:19:44 crc kubenswrapper[4797]: E0216 11:19:44.776266 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4\": container with ID starting with 9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4 not found: ID does not exist" containerID="9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.776288 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4"} err="failed to get container status \"9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4\": rpc error: code = NotFound desc = could not find container \"9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4\": container with ID starting with 9bc23f1dc1b667af561df1c9d08202920f6404aba811cbe14a948d34d3771ce4 not found: ID does not exist" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.776302 4797 scope.go:117] "RemoveContainer" containerID="c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664" Feb 16 11:19:44 crc kubenswrapper[4797]: E0216 11:19:44.776559 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664\": container with ID starting with c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664 not found: ID does not exist" containerID="c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664" Feb 16 11:19:44 crc kubenswrapper[4797]: I0216 11:19:44.776611 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664"} err="failed to get container status \"c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664\": rpc error: code = NotFound desc = could not find container \"c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664\": container with ID starting with c8a15530f37e8eb28d1344c7f05b24004333e9cde1d3a2c8255fdfecede70664 not found: ID does not exist" Feb 16 11:19:45 crc kubenswrapper[4797]: I0216 11:19:45.992143 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" path="/var/lib/kubelet/pods/7225a9dd-da9e-4edd-adf5-13416a4f381c/volumes" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.806441 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr"] Feb 16 11:19:46 crc kubenswrapper[4797]: E0216 11:19:46.806704 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="registry-server" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.806721 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="registry-server" Feb 16 11:19:46 crc kubenswrapper[4797]: E0216 11:19:46.806737 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="extract-utilities" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.806747 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="extract-utilities" Feb 16 11:19:46 crc kubenswrapper[4797]: E0216 11:19:46.806759 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="extract-content" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.806767 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="extract-content" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.806907 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="7225a9dd-da9e-4edd-adf5-13416a4f381c" containerName="registry-server" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.807657 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.811183 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-c82mn" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.819575 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj"] Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.820436 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.823079 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.826029 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr"] Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.859152 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bfllp"] Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.859956 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.862890 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj"] Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.958044 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nflqm\" (UniqueName: \"kubernetes.io/projected/4ee4a2b6-b1e2-43cb-9677-572351c9f2b6-kube-api-access-nflqm\") pod \"nmstate-metrics-58c85c668d-tv8gr\" (UID: \"4ee4a2b6-b1e2-43cb-9677-572351c9f2b6\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.958111 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcr94\" (UniqueName: \"kubernetes.io/projected/de893082-511b-4ef6-a57a-172c7f44f063-kube-api-access-mcr94\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.958141 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-nmstate-lock\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.958169 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnfz7\" (UniqueName: \"kubernetes.io/projected/d2646d1c-e478-4ffb-916e-8feb7e020022-kube-api-access-tnfz7\") pod \"nmstate-webhook-866bcb46dc-cbtgj\" (UID: \"d2646d1c-e478-4ffb-916e-8feb7e020022\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.958206 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-ovs-socket\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.958228 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d2646d1c-e478-4ffb-916e-8feb7e020022-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-cbtgj\" (UID: \"d2646d1c-e478-4ffb-916e-8feb7e020022\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:46 crc kubenswrapper[4797]: I0216 11:19:46.958285 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-dbus-socket\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.028648 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n"] Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.029320 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.031769 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.031774 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.031787 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-qvbqh" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.040661 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n"] Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.059371 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-dbus-socket\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.059427 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nflqm\" (UniqueName: \"kubernetes.io/projected/4ee4a2b6-b1e2-43cb-9677-572351c9f2b6-kube-api-access-nflqm\") pod \"nmstate-metrics-58c85c668d-tv8gr\" (UID: \"4ee4a2b6-b1e2-43cb-9677-572351c9f2b6\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.059465 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcr94\" (UniqueName: \"kubernetes.io/projected/de893082-511b-4ef6-a57a-172c7f44f063-kube-api-access-mcr94\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.059480 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-nmstate-lock\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.059501 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnfz7\" (UniqueName: \"kubernetes.io/projected/d2646d1c-e478-4ffb-916e-8feb7e020022-kube-api-access-tnfz7\") pod \"nmstate-webhook-866bcb46dc-cbtgj\" (UID: \"d2646d1c-e478-4ffb-916e-8feb7e020022\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.059525 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-ovs-socket\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.059540 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d2646d1c-e478-4ffb-916e-8feb7e020022-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-cbtgj\" (UID: \"d2646d1c-e478-4ffb-916e-8feb7e020022\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 
16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.060577 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-dbus-socket\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.060733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-nmstate-lock\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.060764 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/de893082-511b-4ef6-a57a-172c7f44f063-ovs-socket\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.074755 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d2646d1c-e478-4ffb-916e-8feb7e020022-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-cbtgj\" (UID: \"d2646d1c-e478-4ffb-916e-8feb7e020022\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.104652 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnfz7\" (UniqueName: \"kubernetes.io/projected/d2646d1c-e478-4ffb-916e-8feb7e020022-kube-api-access-tnfz7\") pod \"nmstate-webhook-866bcb46dc-cbtgj\" (UID: \"d2646d1c-e478-4ffb-916e-8feb7e020022\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.104825 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcr94\" (UniqueName: \"kubernetes.io/projected/de893082-511b-4ef6-a57a-172c7f44f063-kube-api-access-mcr94\") pod \"nmstate-handler-bfllp\" (UID: \"de893082-511b-4ef6-a57a-172c7f44f063\") " pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.105207 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nflqm\" (UniqueName: \"kubernetes.io/projected/4ee4a2b6-b1e2-43cb-9677-572351c9f2b6-kube-api-access-nflqm\") pod \"nmstate-metrics-58c85c668d-tv8gr\" (UID: \"4ee4a2b6-b1e2-43cb-9677-572351c9f2b6\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.124327 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.137071 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.161197 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/93bb854d-0f24-4def-95d9-17a1efbd0afa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.161288 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqkpr\" (UniqueName: \"kubernetes.io/projected/93bb854d-0f24-4def-95d9-17a1efbd0afa-kube-api-access-xqkpr\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.161315 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/93bb854d-0f24-4def-95d9-17a1efbd0afa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.180980 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.223795 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c8f4599f5-xz6gt"] Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.231859 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.254111 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c8f4599f5-xz6gt"] Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.264711 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/93bb854d-0f24-4def-95d9-17a1efbd0afa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.264765 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqkpr\" (UniqueName: \"kubernetes.io/projected/93bb854d-0f24-4def-95d9-17a1efbd0afa-kube-api-access-xqkpr\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.264788 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/93bb854d-0f24-4def-95d9-17a1efbd0afa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.265860 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/93bb854d-0f24-4def-95d9-17a1efbd0afa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.270746 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/93bb854d-0f24-4def-95d9-17a1efbd0afa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.288288 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqkpr\" (UniqueName: \"kubernetes.io/projected/93bb854d-0f24-4def-95d9-17a1efbd0afa-kube-api-access-xqkpr\") pod \"nmstate-console-plugin-5c78fc5d65-8j77n\" (UID: \"93bb854d-0f24-4def-95d9-17a1efbd0afa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.341892 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.365832 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-service-ca\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.365883 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-oauth-config\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.365914 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-oauth-serving-cert\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.365938 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-serving-cert\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.365964 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-trusted-ca-bundle\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.365989 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g96d\" (UniqueName: \"kubernetes.io/projected/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-kube-api-access-6g96d\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.366007 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-config\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.408075 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj"] Feb 16 11:19:47 crc kubenswrapper[4797]: W0216 11:19:47.413206 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2646d1c_e478_4ffb_916e_8feb7e020022.slice/crio-83dd0461e4b7b4fc6f2074e7ae98669d6f2e83bea80332d7c892c65ec3bafd32 WatchSource:0}: Error finding container 
83dd0461e4b7b4fc6f2074e7ae98669d6f2e83bea80332d7c892c65ec3bafd32: Status 404 returned error can't find the container with id 83dd0461e4b7b4fc6f2074e7ae98669d6f2e83bea80332d7c892c65ec3bafd32 Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.470703 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-trusted-ca-bundle\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.470754 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g96d\" (UniqueName: \"kubernetes.io/projected/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-kube-api-access-6g96d\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.470774 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-config\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.470838 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-service-ca\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.470865 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-oauth-config\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.470882 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-oauth-serving-cert\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.470898 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-serving-cert\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.471893 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-trusted-ca-bundle\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.472279 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-service-ca\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.472919 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-config\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.473454 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-oauth-serving-cert\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.474749 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-serving-cert\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.479251 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-console-oauth-config\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.486947 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g96d\" (UniqueName: \"kubernetes.io/projected/8baf7c93-6ae1-47cb-8708-ec92e65e9d63-kube-api-access-6g96d\") pod \"console-7c8f4599f5-xz6gt\" (UID: \"8baf7c93-6ae1-47cb-8708-ec92e65e9d63\") " pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.553038 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.669166 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr"] Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.729167 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bfllp" event={"ID":"de893082-511b-4ef6-a57a-172c7f44f063","Type":"ContainerStarted","Data":"bb66a0d74586cdf2281d198eb7cf0260e1ed0b3599b48a22b2d9707719faafca"} Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.730440 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" event={"ID":"4ee4a2b6-b1e2-43cb-9677-572351c9f2b6","Type":"ContainerStarted","Data":"aa2eea99ac75312d1eb5a0b3979ae0877a9b84a7b1243d9d3e68c38b0dc60a15"} Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.731779 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" event={"ID":"d2646d1c-e478-4ffb-916e-8feb7e020022","Type":"ContainerStarted","Data":"83dd0461e4b7b4fc6f2074e7ae98669d6f2e83bea80332d7c892c65ec3bafd32"} Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.744892 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n"] Feb 16 11:19:47 crc kubenswrapper[4797]: W0216 11:19:47.751141 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93bb854d_0f24_4def_95d9_17a1efbd0afa.slice/crio-75ed5b3fe765338fcd50900cb239b3a85e7fb40d21a1816eb009c4ed09fde134 WatchSource:0}: Error finding container 75ed5b3fe765338fcd50900cb239b3a85e7fb40d21a1816eb009c4ed09fde134: Status 404 returned error can't find the container with id 75ed5b3fe765338fcd50900cb239b3a85e7fb40d21a1816eb009c4ed09fde134 Feb 16 11:19:47 crc kubenswrapper[4797]: I0216 11:19:47.959432 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c8f4599f5-xz6gt"] Feb 16 11:19:47 crc kubenswrapper[4797]: W0216 11:19:47.965763 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8baf7c93_6ae1_47cb_8708_ec92e65e9d63.slice/crio-efb2e4b969e41708d7f87b922ed19bbdd62192c00203e43c2df8144730582651 WatchSource:0}: Error finding container efb2e4b969e41708d7f87b922ed19bbdd62192c00203e43c2df8144730582651: Status 404 returned error can't find the container with id efb2e4b969e41708d7f87b922ed19bbdd62192c00203e43c2df8144730582651 Feb 16 11:19:48 crc kubenswrapper[4797]: I0216 11:19:48.746849 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" event={"ID":"93bb854d-0f24-4def-95d9-17a1efbd0afa","Type":"ContainerStarted","Data":"75ed5b3fe765338fcd50900cb239b3a85e7fb40d21a1816eb009c4ed09fde134"} Feb 16 11:19:48 crc kubenswrapper[4797]: I0216 11:19:48.750734 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c8f4599f5-xz6gt" event={"ID":"8baf7c93-6ae1-47cb-8708-ec92e65e9d63","Type":"ContainerStarted","Data":"ec9e54eb434937d52777f9a05a81d4297e4981b2d15008954cb84a3e0b8a61b8"} Feb 16 11:19:48 crc kubenswrapper[4797]: I0216 11:19:48.750784 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c8f4599f5-xz6gt" 
event={"ID":"8baf7c93-6ae1-47cb-8708-ec92e65e9d63","Type":"ContainerStarted","Data":"efb2e4b969e41708d7f87b922ed19bbdd62192c00203e43c2df8144730582651"} Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.771446 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" event={"ID":"4ee4a2b6-b1e2-43cb-9677-572351c9f2b6","Type":"ContainerStarted","Data":"8a255767bd40567a6d44c03f0653306f8252f59fec3c2c70df4eadcdedd5348a"} Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.773610 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" event={"ID":"d2646d1c-e478-4ffb-916e-8feb7e020022","Type":"ContainerStarted","Data":"ec8892825ffae4f3bcc0b46d6e08290d70a060ed31f095360c2334f0f44219d0"} Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.773861 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.775904 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" event={"ID":"93bb854d-0f24-4def-95d9-17a1efbd0afa","Type":"ContainerStarted","Data":"b73925dc615391ebe2e00870dbac72585b10e041bf73e9bc32a2dcf382495d3f"} Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.777445 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bfllp" event={"ID":"de893082-511b-4ef6-a57a-172c7f44f063","Type":"ContainerStarted","Data":"55cb71f9b13a2ad1f1eed50d9fddd6350e787967d63288d01fe36b93fa6895b2"} Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.777832 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.798873 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c8f4599f5-xz6gt" podStartSLOduration=3.798846723 podStartE2EDuration="3.798846723s" podCreationTimestamp="2026-02-16 11:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:19:48.775198231 +0000 UTC m=+783.495383231" watchObservedRunningTime="2026-02-16 11:19:50.798846723 +0000 UTC m=+785.519031733" Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.803442 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" podStartSLOduration=1.81090112 podStartE2EDuration="4.803420207s" podCreationTimestamp="2026-02-16 11:19:46 +0000 UTC" firstStartedPulling="2026-02-16 11:19:47.421083086 +0000 UTC m=+782.141268066" lastFinishedPulling="2026-02-16 11:19:50.413602173 +0000 UTC m=+785.133787153" observedRunningTime="2026-02-16 11:19:50.797260789 +0000 UTC m=+785.517445769" watchObservedRunningTime="2026-02-16 11:19:50.803420207 +0000 UTC m=+785.523605227" Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.823677 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bfllp" podStartSLOduration=1.612841172 podStartE2EDuration="4.823659846s" podCreationTimestamp="2026-02-16 11:19:46 +0000 UTC" firstStartedPulling="2026-02-16 11:19:47.221937259 +0000 UTC m=+781.942122239" lastFinishedPulling="2026-02-16 11:19:50.432755933 +0000 UTC m=+785.152940913" observedRunningTime="2026-02-16 11:19:50.822935697 +0000 UTC 
m=+785.543120717" watchObservedRunningTime="2026-02-16 11:19:50.823659846 +0000 UTC m=+785.543844836" Feb 16 11:19:50 crc kubenswrapper[4797]: I0216 11:19:50.855454 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-8j77n" podStartSLOduration=2.195314317 podStartE2EDuration="4.855433379s" podCreationTimestamp="2026-02-16 11:19:46 +0000 UTC" firstStartedPulling="2026-02-16 11:19:47.753359377 +0000 UTC m=+782.473544357" lastFinishedPulling="2026-02-16 11:19:50.413478419 +0000 UTC m=+785.133663419" observedRunningTime="2026-02-16 11:19:50.843253478 +0000 UTC m=+785.563438488" watchObservedRunningTime="2026-02-16 11:19:50.855433379 +0000 UTC m=+785.575618359" Feb 16 11:19:53 crc kubenswrapper[4797]: I0216 11:19:53.799685 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" event={"ID":"4ee4a2b6-b1e2-43cb-9677-572351c9f2b6","Type":"ContainerStarted","Data":"91f43289547721807cf181b5693c43568df2a8abdf46d502cf2853d4e95af4c6"} Feb 16 11:19:53 crc kubenswrapper[4797]: I0216 11:19:53.828411 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-tv8gr" podStartSLOduration=2.429210866 podStartE2EDuration="7.828386254s" podCreationTimestamp="2026-02-16 11:19:46 +0000 UTC" firstStartedPulling="2026-02-16 11:19:47.694376306 +0000 UTC m=+782.414561286" lastFinishedPulling="2026-02-16 11:19:53.093551684 +0000 UTC m=+787.813736674" observedRunningTime="2026-02-16 11:19:53.824191351 +0000 UTC m=+788.544376371" watchObservedRunningTime="2026-02-16 11:19:53.828386254 +0000 UTC m=+788.548571274" Feb 16 11:19:57 crc kubenswrapper[4797]: I0216 11:19:57.223933 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bfllp" Feb 16 11:19:57 crc kubenswrapper[4797]: I0216 11:19:57.554059 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:57 crc kubenswrapper[4797]: I0216 11:19:57.554129 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:57 crc kubenswrapper[4797]: I0216 11:19:57.559348 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:57 crc kubenswrapper[4797]: I0216 11:19:57.833793 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c8f4599f5-xz6gt" Feb 16 11:19:57 crc kubenswrapper[4797]: I0216 11:19:57.894360 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4d5np"] Feb 16 11:20:07 crc kubenswrapper[4797]: I0216 11:20:07.143198 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cbtgj" Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.703321 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.703731 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" 
podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.703785 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.704314 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7af1c89447ff7ab76e09ca5508cebe1098d580ac409a9bf112a6d6541596109"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.704369 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://e7af1c89447ff7ab76e09ca5508cebe1098d580ac409a9bf112a6d6541596109" gracePeriod=600 Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.953164 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="e7af1c89447ff7ab76e09ca5508cebe1098d580ac409a9bf112a6d6541596109" exitCode=0 Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.953229 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"e7af1c89447ff7ab76e09ca5508cebe1098d580ac409a9bf112a6d6541596109"} Feb 16 11:20:11 crc kubenswrapper[4797]: I0216 11:20:11.953571 4797 scope.go:117] "RemoveContainer" containerID="21e93e8275d66acf9ae0c0ae90f1582953ed0ef2b28e1d32c012b6372bb207c7" Feb 16 11:20:12 crc kubenswrapper[4797]: I0216 11:20:12.961201 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"dbd4bdb440a4910da1233a40f8d1a68f6e489c128c5069841c232e62039aea64"} Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.001466 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n"] Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.003336 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.005684 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.030565 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n"] Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.034970 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.035129 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgldf\" (UniqueName: \"kubernetes.io/projected/4637f00b-8997-47b5-8164-e0ee843a75bd-kube-api-access-zgldf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.035200 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.137440 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgldf\" (UniqueName: \"kubernetes.io/projected/4637f00b-8997-47b5-8164-e0ee843a75bd-kube-api-access-zgldf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.137720 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.137882 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.138211 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.138343 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.158331 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgldf\" (UniqueName: \"kubernetes.io/projected/4637f00b-8997-47b5-8164-e0ee843a75bd-kube-api-access-zgldf\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.321463 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.585231 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n"] Feb 16 11:20:22 crc kubenswrapper[4797]: I0216 11:20:22.947205 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4d5np" podUID="61891ace-57b4-446d-afb5-cec9848da89a" containerName="console" containerID="cri-o://4c3895fc2bb657dcb402912473d4dede9d7ac3c0128601a9720958b207ff6644" gracePeriod=15 Feb 16 11:20:23 crc kubenswrapper[4797]: I0216 11:20:23.035207 4797 generic.go:334] "Generic (PLEG): container finished" podID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerID="59af492f43c00e319e64ae607c36480fc86199556a6d58f8481b5a69ac9df1ac" exitCode=0 Feb 16 11:20:23 crc kubenswrapper[4797]: I0216 11:20:23.035267 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" event={"ID":"4637f00b-8997-47b5-8164-e0ee843a75bd","Type":"ContainerDied","Data":"59af492f43c00e319e64ae607c36480fc86199556a6d58f8481b5a69ac9df1ac"} Feb 16 11:20:23 crc kubenswrapper[4797]: I0216 11:20:23.035301 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" event={"ID":"4637f00b-8997-47b5-8164-e0ee843a75bd","Type":"ContainerStarted","Data":"0d7fd863d5c7c5290cdd29f54e290d79a3ea747e8999ef6b6cc89bef7e931180"} Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.044726 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4d5np_61891ace-57b4-446d-afb5-cec9848da89a/console/0.log" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.044998 4797 generic.go:334] "Generic (PLEG): container finished" podID="61891ace-57b4-446d-afb5-cec9848da89a" containerID="4c3895fc2bb657dcb402912473d4dede9d7ac3c0128601a9720958b207ff6644" exitCode=2 Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.045029 4797 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4d5np" event={"ID":"61891ace-57b4-446d-afb5-cec9848da89a","Type":"ContainerDied","Data":"4c3895fc2bb657dcb402912473d4dede9d7ac3c0128601a9720958b207ff6644"} Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.666534 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4d5np_61891ace-57b4-446d-afb5-cec9848da89a/console/0.log" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.666726 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.770462 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97x74\" (UniqueName: \"kubernetes.io/projected/61891ace-57b4-446d-afb5-cec9848da89a-kube-api-access-97x74\") pod \"61891ace-57b4-446d-afb5-cec9848da89a\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.770592 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-oauth-config\") pod \"61891ace-57b4-446d-afb5-cec9848da89a\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.770635 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-oauth-serving-cert\") pod \"61891ace-57b4-446d-afb5-cec9848da89a\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.770675 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-console-config\") pod \"61891ace-57b4-446d-afb5-cec9848da89a\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.770731 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-service-ca\") pod \"61891ace-57b4-446d-afb5-cec9848da89a\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.770791 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-trusted-ca-bundle\") pod \"61891ace-57b4-446d-afb5-cec9848da89a\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.770847 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-serving-cert\") pod \"61891ace-57b4-446d-afb5-cec9848da89a\" (UID: \"61891ace-57b4-446d-afb5-cec9848da89a\") " Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.771614 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "61891ace-57b4-446d-afb5-cec9848da89a" (UID: "61891ace-57b4-446d-afb5-cec9848da89a"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.771731 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-console-config" (OuterVolumeSpecName: "console-config") pod "61891ace-57b4-446d-afb5-cec9848da89a" (UID: "61891ace-57b4-446d-afb5-cec9848da89a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.771743 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "61891ace-57b4-446d-afb5-cec9848da89a" (UID: "61891ace-57b4-446d-afb5-cec9848da89a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.772252 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-service-ca" (OuterVolumeSpecName: "service-ca") pod "61891ace-57b4-446d-afb5-cec9848da89a" (UID: "61891ace-57b4-446d-afb5-cec9848da89a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.778385 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "61891ace-57b4-446d-afb5-cec9848da89a" (UID: "61891ace-57b4-446d-afb5-cec9848da89a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.779644 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "61891ace-57b4-446d-afb5-cec9848da89a" (UID: "61891ace-57b4-446d-afb5-cec9848da89a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.780752 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61891ace-57b4-446d-afb5-cec9848da89a-kube-api-access-97x74" (OuterVolumeSpecName: "kube-api-access-97x74") pod "61891ace-57b4-446d-afb5-cec9848da89a" (UID: "61891ace-57b4-446d-afb5-cec9848da89a"). InnerVolumeSpecName "kube-api-access-97x74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.872767 4797 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.872844 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97x74\" (UniqueName: \"kubernetes.io/projected/61891ace-57b4-446d-afb5-cec9848da89a-kube-api-access-97x74\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.872873 4797 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/61891ace-57b4-446d-afb5-cec9848da89a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.872898 4797 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.872924 4797 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.872948 4797 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:24 crc kubenswrapper[4797]: I0216 11:20:24.872971 4797 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61891ace-57b4-446d-afb5-cec9848da89a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:25 crc kubenswrapper[4797]: I0216 11:20:25.054432 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4d5np_61891ace-57b4-446d-afb5-cec9848da89a/console/0.log" Feb 16 11:20:25 crc kubenswrapper[4797]: I0216 11:20:25.055242 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4d5np" event={"ID":"61891ace-57b4-446d-afb5-cec9848da89a","Type":"ContainerDied","Data":"e2ccfde0b7bee3cda4fe60ce9a6fa75995b17bc372202e7314c4ae0e0edd8ffe"} Feb 16 11:20:25 crc kubenswrapper[4797]: I0216 11:20:25.055428 4797 scope.go:117] "RemoveContainer" containerID="4c3895fc2bb657dcb402912473d4dede9d7ac3c0128601a9720958b207ff6644" Feb 16 11:20:25 crc kubenswrapper[4797]: I0216 11:20:25.055386 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4d5np" Feb 16 11:20:25 crc kubenswrapper[4797]: I0216 11:20:25.116536 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4d5np"] Feb 16 11:20:25 crc kubenswrapper[4797]: I0216 11:20:25.125408 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4d5np"] Feb 16 11:20:25 crc kubenswrapper[4797]: I0216 11:20:25.990300 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61891ace-57b4-446d-afb5-cec9848da89a" path="/var/lib/kubelet/pods/61891ace-57b4-446d-afb5-cec9848da89a/volumes" Feb 16 11:20:26 crc kubenswrapper[4797]: I0216 11:20:26.064689 4797 generic.go:334] "Generic (PLEG): container finished" podID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerID="d4179de36d274283e3ee9487d6284b99d6554a98d0b1c38531d6b5e6407b069f" exitCode=0 Feb 16 11:20:26 crc kubenswrapper[4797]: I0216 11:20:26.064744 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" event={"ID":"4637f00b-8997-47b5-8164-e0ee843a75bd","Type":"ContainerDied","Data":"d4179de36d274283e3ee9487d6284b99d6554a98d0b1c38531d6b5e6407b069f"} Feb 16 11:20:27 crc kubenswrapper[4797]: I0216 11:20:27.074048 4797 generic.go:334] "Generic (PLEG): container finished" podID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerID="91cf85ea8ac35c80a5c6cdb1419fa809312a964cadad39e205e224a77e071e94" exitCode=0 Feb 16 11:20:27 crc kubenswrapper[4797]: I0216 11:20:27.074143 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" event={"ID":"4637f00b-8997-47b5-8164-e0ee843a75bd","Type":"ContainerDied","Data":"91cf85ea8ac35c80a5c6cdb1419fa809312a964cadad39e205e224a77e071e94"} Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.380525 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.524243 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-util\") pod \"4637f00b-8997-47b5-8164-e0ee843a75bd\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.524751 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgldf\" (UniqueName: \"kubernetes.io/projected/4637f00b-8997-47b5-8164-e0ee843a75bd-kube-api-access-zgldf\") pod \"4637f00b-8997-47b5-8164-e0ee843a75bd\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.525002 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-bundle\") pod \"4637f00b-8997-47b5-8164-e0ee843a75bd\" (UID: \"4637f00b-8997-47b5-8164-e0ee843a75bd\") " Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.526930 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-bundle" (OuterVolumeSpecName: "bundle") pod "4637f00b-8997-47b5-8164-e0ee843a75bd" (UID: "4637f00b-8997-47b5-8164-e0ee843a75bd"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.534953 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4637f00b-8997-47b5-8164-e0ee843a75bd-kube-api-access-zgldf" (OuterVolumeSpecName: "kube-api-access-zgldf") pod "4637f00b-8997-47b5-8164-e0ee843a75bd" (UID: "4637f00b-8997-47b5-8164-e0ee843a75bd"). InnerVolumeSpecName "kube-api-access-zgldf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.549746 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-util" (OuterVolumeSpecName: "util") pod "4637f00b-8997-47b5-8164-e0ee843a75bd" (UID: "4637f00b-8997-47b5-8164-e0ee843a75bd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.628950 4797 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.629005 4797 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4637f00b-8997-47b5-8164-e0ee843a75bd-util\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:28 crc kubenswrapper[4797]: I0216 11:20:28.629016 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgldf\" (UniqueName: \"kubernetes.io/projected/4637f00b-8997-47b5-8164-e0ee843a75bd-kube-api-access-zgldf\") on node \"crc\" DevicePath \"\"" Feb 16 11:20:29 crc kubenswrapper[4797]: I0216 11:20:29.092700 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" event={"ID":"4637f00b-8997-47b5-8164-e0ee843a75bd","Type":"ContainerDied","Data":"0d7fd863d5c7c5290cdd29f54e290d79a3ea747e8999ef6b6cc89bef7e931180"} Feb 16 11:20:29 crc kubenswrapper[4797]: I0216 11:20:29.092761 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d7fd863d5c7c5290cdd29f54e290d79a3ea747e8999ef6b6cc89bef7e931180" Feb 16 11:20:29 crc kubenswrapper[4797]: I0216 11:20:29.092797 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.892960 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt"] Feb 16 11:20:39 crc kubenswrapper[4797]: E0216 11:20:39.893549 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerName="util" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.893560 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerName="util" Feb 16 11:20:39 crc kubenswrapper[4797]: E0216 11:20:39.893569 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61891ace-57b4-446d-afb5-cec9848da89a" containerName="console" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.893591 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="61891ace-57b4-446d-afb5-cec9848da89a" containerName="console" Feb 16 11:20:39 crc kubenswrapper[4797]: E0216 11:20:39.893599 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerName="extract" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.893605 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerName="extract" Feb 16 11:20:39 crc kubenswrapper[4797]: E0216 11:20:39.893635 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerName="pull" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.893641 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerName="pull" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.893742 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="61891ace-57b4-446d-afb5-cec9848da89a" containerName="console" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.893751 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="4637f00b-8997-47b5-8164-e0ee843a75bd" containerName="extract" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.894141 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.896789 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.896815 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-h94tm" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.896849 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.896924 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.897369 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.917085 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt"] Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.978919 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85a3876a-7599-42bc-871c-559ab66a672e-apiservice-cert\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.978966 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85a3876a-7599-42bc-871c-559ab66a672e-webhook-cert\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:39 crc kubenswrapper[4797]: I0216 11:20:39.979061 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wshkd\" (UniqueName: \"kubernetes.io/projected/85a3876a-7599-42bc-871c-559ab66a672e-kube-api-access-wshkd\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.080166 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wshkd\" (UniqueName: \"kubernetes.io/projected/85a3876a-7599-42bc-871c-559ab66a672e-kube-api-access-wshkd\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.080244 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85a3876a-7599-42bc-871c-559ab66a672e-apiservice-cert\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.080264 
4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85a3876a-7599-42bc-871c-559ab66a672e-webhook-cert\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.086339 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85a3876a-7599-42bc-871c-559ab66a672e-webhook-cert\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.088232 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85a3876a-7599-42bc-871c-559ab66a672e-apiservice-cert\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.107336 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wshkd\" (UniqueName: \"kubernetes.io/projected/85a3876a-7599-42bc-871c-559ab66a672e-kube-api-access-wshkd\") pod \"metallb-operator-controller-manager-7576dc79b7-85mbt\" (UID: \"85a3876a-7599-42bc-871c-559ab66a672e\") " pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.208369 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.239053 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-965c86b89-tdccp"] Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.239960 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.242372 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-68xbt" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.242663 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.242804 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.282943 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0808430f-3807-401c-8e89-be026c69be52-apiservice-cert\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.283003 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0808430f-3807-401c-8e89-be026c69be52-webhook-cert\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.283053 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hx4\" (UniqueName: \"kubernetes.io/projected/0808430f-3807-401c-8e89-be026c69be52-kube-api-access-s4hx4\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.305680 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-965c86b89-tdccp"] Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.384159 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0808430f-3807-401c-8e89-be026c69be52-apiservice-cert\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.384733 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0808430f-3807-401c-8e89-be026c69be52-webhook-cert\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.384789 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4hx4\" (UniqueName: \"kubernetes.io/projected/0808430f-3807-401c-8e89-be026c69be52-kube-api-access-s4hx4\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.390959 4797 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0808430f-3807-401c-8e89-be026c69be52-webhook-cert\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.406257 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0808430f-3807-401c-8e89-be026c69be52-apiservice-cert\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.408370 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4hx4\" (UniqueName: \"kubernetes.io/projected/0808430f-3807-401c-8e89-be026c69be52-kube-api-access-s4hx4\") pod \"metallb-operator-webhook-server-965c86b89-tdccp\" (UID: \"0808430f-3807-401c-8e89-be026c69be52\") " pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.574801 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.659811 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt"] Feb 16 11:20:40 crc kubenswrapper[4797]: W0216 11:20:40.667545 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85a3876a_7599_42bc_871c_559ab66a672e.slice/crio-7947274bd4bdc632533384cf67b5e6ef0d5daf03d41fc305b1781259d3640116 WatchSource:0}: Error finding container 7947274bd4bdc632533384cf67b5e6ef0d5daf03d41fc305b1781259d3640116: Status 404 returned error can't find the container with id 7947274bd4bdc632533384cf67b5e6ef0d5daf03d41fc305b1781259d3640116 Feb 16 11:20:40 crc kubenswrapper[4797]: I0216 11:20:40.793025 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-965c86b89-tdccp"] Feb 16 11:20:40 crc kubenswrapper[4797]: W0216 11:20:40.794998 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0808430f_3807_401c_8e89_be026c69be52.slice/crio-d3c3ed06a0aac4d256be2297da7d639eea377e0e98ce8cba148b5e64b96ed506 WatchSource:0}: Error finding container d3c3ed06a0aac4d256be2297da7d639eea377e0e98ce8cba148b5e64b96ed506: Status 404 returned error can't find the container with id d3c3ed06a0aac4d256be2297da7d639eea377e0e98ce8cba148b5e64b96ed506 Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.162003 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" event={"ID":"0808430f-3807-401c-8e89-be026c69be52","Type":"ContainerStarted","Data":"d3c3ed06a0aac4d256be2297da7d639eea377e0e98ce8cba148b5e64b96ed506"} Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.165317 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" 
event={"ID":"85a3876a-7599-42bc-871c-559ab66a672e","Type":"ContainerStarted","Data":"7947274bd4bdc632533384cf67b5e6ef0d5daf03d41fc305b1781259d3640116"}
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.431925 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-795gm"]
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.434304 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.440271 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-795gm"]
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.499474 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-utilities\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.499792 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86xk6\" (UniqueName: \"kubernetes.io/projected/5bee3530-9188-4108-b414-83e85ef543a6-kube-api-access-86xk6\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.499972 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-catalog-content\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.601471 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-utilities\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.601530 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86xk6\" (UniqueName: \"kubernetes.io/projected/5bee3530-9188-4108-b414-83e85ef543a6-kube-api-access-86xk6\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.601562 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-catalog-content\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.602113 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-catalog-content\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.602840 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-utilities\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.627654 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86xk6\" (UniqueName: \"kubernetes.io/projected/5bee3530-9188-4108-b414-83e85ef543a6-kube-api-access-86xk6\") pod \"community-operators-795gm\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") " pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:41 crc kubenswrapper[4797]: I0216 11:20:41.801427 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:42 crc kubenswrapper[4797]: I0216 11:20:42.269346 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-795gm"]
Feb 16 11:20:42 crc kubenswrapper[4797]: W0216 11:20:42.291698 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bee3530_9188_4108_b414_83e85ef543a6.slice/crio-07b3056daa863276d5de123036c61a04396f59a1ba27845c2c50d0c9403d4011 WatchSource:0}: Error finding container 07b3056daa863276d5de123036c61a04396f59a1ba27845c2c50d0c9403d4011: Status 404 returned error can't find the container with id 07b3056daa863276d5de123036c61a04396f59a1ba27845c2c50d0c9403d4011
Feb 16 11:20:43 crc kubenswrapper[4797]: I0216 11:20:43.181098 4797 generic.go:334] "Generic (PLEG): container finished" podID="5bee3530-9188-4108-b414-83e85ef543a6" containerID="3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a" exitCode=0
Feb 16 11:20:43 crc kubenswrapper[4797]: I0216 11:20:43.181520 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-795gm" event={"ID":"5bee3530-9188-4108-b414-83e85ef543a6","Type":"ContainerDied","Data":"3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a"}
Feb 16 11:20:43 crc kubenswrapper[4797]: I0216 11:20:43.181617 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-795gm" event={"ID":"5bee3530-9188-4108-b414-83e85ef543a6","Type":"ContainerStarted","Data":"07b3056daa863276d5de123036c61a04396f59a1ba27845c2c50d0c9403d4011"}
Feb 16 11:20:46 crc kubenswrapper[4797]: I0216 11:20:46.200684 4797 generic.go:334] "Generic (PLEG): container finished" podID="5bee3530-9188-4108-b414-83e85ef543a6" containerID="14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8" exitCode=0
Feb 16 11:20:46 crc kubenswrapper[4797]: I0216 11:20:46.200772 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-795gm" event={"ID":"5bee3530-9188-4108-b414-83e85ef543a6","Type":"ContainerDied","Data":"14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8"}
Feb 16 11:20:46 crc kubenswrapper[4797]: I0216 11:20:46.203432 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" event={"ID":"0808430f-3807-401c-8e89-be026c69be52","Type":"ContainerStarted","Data":"cac7d7b08747b3794bf607746964d1516d2d8a8f117764a58d70c4a53b777d69"}
Feb 16 11:20:46 crc kubenswrapper[4797]: I0216 11:20:46.205156 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" event={"ID":"85a3876a-7599-42bc-871c-559ab66a672e","Type":"ContainerStarted","Data":"89410947fef582affa566557591bc8d20b450ed2646a7885f273c44f44f0eb09"}
Feb 16 11:20:46 crc kubenswrapper[4797]: I0216 11:20:46.205293 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt"
Feb 16 11:20:46 crc kubenswrapper[4797]: I0216 11:20:46.274779 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt" podStartSLOduration=2.342434901 podStartE2EDuration="7.274754004s" podCreationTimestamp="2026-02-16 11:20:39 +0000 UTC" firstStartedPulling="2026-02-16 11:20:40.675547075 +0000 UTC m=+835.395732055" lastFinishedPulling="2026-02-16 11:20:45.607866178 +0000 UTC m=+840.328051158" observedRunningTime="2026-02-16 11:20:46.252862659 +0000 UTC m=+840.973047649" watchObservedRunningTime="2026-02-16 11:20:46.274754004 +0000 UTC m=+840.994938994"
Feb 16 11:20:46 crc kubenswrapper[4797]: I0216 11:20:46.278973 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp" podStartSLOduration=1.445677633 podStartE2EDuration="6.278953157s" podCreationTimestamp="2026-02-16 11:20:40 +0000 UTC" firstStartedPulling="2026-02-16 11:20:40.798316478 +0000 UTC m=+835.518501458" lastFinishedPulling="2026-02-16 11:20:45.631592002 +0000 UTC m=+840.351776982" observedRunningTime="2026-02-16 11:20:46.274463276 +0000 UTC m=+840.994648256" watchObservedRunningTime="2026-02-16 11:20:46.278953157 +0000 UTC m=+840.999138137"
Feb 16 11:20:47 crc kubenswrapper[4797]: I0216 11:20:47.213394 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-795gm" event={"ID":"5bee3530-9188-4108-b414-83e85ef543a6","Type":"ContainerStarted","Data":"238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47"}
Feb 16 11:20:47 crc kubenswrapper[4797]: I0216 11:20:47.213755 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp"
Feb 16 11:20:47 crc kubenswrapper[4797]: I0216 11:20:47.231836 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-795gm" podStartSLOduration=3.726212602 podStartE2EDuration="6.231816159s" podCreationTimestamp="2026-02-16 11:20:41 +0000 UTC" firstStartedPulling="2026-02-16 11:20:44.100614276 +0000 UTC m=+838.820799256" lastFinishedPulling="2026-02-16 11:20:46.606217823 +0000 UTC m=+841.326402813" observedRunningTime="2026-02-16 11:20:47.230390669 +0000 UTC m=+841.950575649" watchObservedRunningTime="2026-02-16 11:20:47.231816159 +0000 UTC m=+841.952001139"
Feb 16 11:20:51 crc kubenswrapper[4797]: I0216 11:20:51.802570 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:51 crc kubenswrapper[4797]: I0216 11:20:51.803099 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:51 crc kubenswrapper[4797]: I0216 11:20:51.864791 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:52 crc kubenswrapper[4797]: I0216 11:20:52.295417 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.283745 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-795gm"]
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.283952 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-795gm" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="registry-server" containerID="cri-o://238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47" gracePeriod=2
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.695727 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kf4dx"]
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.696932 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.701310 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.711565 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kf4dx"]
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.799070 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86xk6\" (UniqueName: \"kubernetes.io/projected/5bee3530-9188-4108-b414-83e85ef543a6-kube-api-access-86xk6\") pod \"5bee3530-9188-4108-b414-83e85ef543a6\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") "
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.799483 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-catalog-content\") pod \"5bee3530-9188-4108-b414-83e85ef543a6\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") "
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.799576 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-utilities\") pod \"5bee3530-9188-4108-b414-83e85ef543a6\" (UID: \"5bee3530-9188-4108-b414-83e85ef543a6\") "
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.799781 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-catalog-content\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.799836 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zkp\" (UniqueName: \"kubernetes.io/projected/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-kube-api-access-z6zkp\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.799942 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-utilities\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.800552 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-utilities" (OuterVolumeSpecName: "utilities") pod "5bee3530-9188-4108-b414-83e85ef543a6" (UID: "5bee3530-9188-4108-b414-83e85ef543a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.812487 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bee3530-9188-4108-b414-83e85ef543a6-kube-api-access-86xk6" (OuterVolumeSpecName: "kube-api-access-86xk6") pod "5bee3530-9188-4108-b414-83e85ef543a6" (UID: "5bee3530-9188-4108-b414-83e85ef543a6"). InnerVolumeSpecName "kube-api-access-86xk6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.872756 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5bee3530-9188-4108-b414-83e85ef543a6" (UID: "5bee3530-9188-4108-b414-83e85ef543a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.900604 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-utilities\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.900662 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-catalog-content\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.900698 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zkp\" (UniqueName: \"kubernetes.io/projected/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-kube-api-access-z6zkp\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.900756 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.900794 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86xk6\" (UniqueName: \"kubernetes.io/projected/5bee3530-9188-4108-b414-83e85ef543a6-kube-api-access-86xk6\") on node \"crc\" DevicePath \"\""
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.900912 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5bee3530-9188-4108-b414-83e85ef543a6-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.901199 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-utilities\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.901260 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-catalog-content\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:55 crc kubenswrapper[4797]: I0216 11:20:55.919865 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zkp\" (UniqueName: \"kubernetes.io/projected/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-kube-api-access-z6zkp\") pod \"redhat-marketplace-kf4dx\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") " pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.014915 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.241807 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kf4dx"]
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.279614 4797 generic.go:334] "Generic (PLEG): container finished" podID="5bee3530-9188-4108-b414-83e85ef543a6" containerID="238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47" exitCode=0
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.279966 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-795gm" event={"ID":"5bee3530-9188-4108-b414-83e85ef543a6","Type":"ContainerDied","Data":"238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47"}
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.280000 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-795gm" event={"ID":"5bee3530-9188-4108-b414-83e85ef543a6","Type":"ContainerDied","Data":"07b3056daa863276d5de123036c61a04396f59a1ba27845c2c50d0c9403d4011"}
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.280023 4797 scope.go:117] "RemoveContainer" containerID="238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.280167 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-795gm"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.287934 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kf4dx" event={"ID":"883d8896-8cd4-45e5-8e74-d0be8e1bdb64","Type":"ContainerStarted","Data":"bf81a8806eea8f892a38da9457fce82c32eb9634b2c70e04d49406dc93535941"}
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.305696 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-795gm"]
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.309884 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-795gm"]
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.310968 4797 scope.go:117] "RemoveContainer" containerID="14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.341866 4797 scope.go:117] "RemoveContainer" containerID="3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.361067 4797 scope.go:117] "RemoveContainer" containerID="238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47"
Feb 16 11:20:56 crc kubenswrapper[4797]: E0216 11:20:56.361615 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47\": container with ID starting with 238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47 not found: ID does not exist" containerID="238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.361648 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47"} err="failed to get container status \"238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47\": rpc error: code = NotFound desc = could not find container \"238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47\": container with ID starting with 238943032ee699ccf428d6d95a241e5ed90f091ea260c8f37134aeb3885ceb47 not found: ID does not exist"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.361672 4797 scope.go:117] "RemoveContainer" containerID="14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8"
Feb 16 11:20:56 crc kubenswrapper[4797]: E0216 11:20:56.361917 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8\": container with ID starting with 14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8 not found: ID does not exist" containerID="14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.361940 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8"} err="failed to get container status \"14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8\": rpc error: code = NotFound desc = could not find container \"14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8\": container with ID starting with 14376fa7a5de0d1f4ce08ec77b8817a5b26564078a5b32f05bae702c0b80e3f8 not found: ID does not exist"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.361954 4797 scope.go:117] "RemoveContainer" containerID="3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a"
Feb 16 11:20:56 crc kubenswrapper[4797]: E0216 11:20:56.362165 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a\": container with ID starting with 3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a not found: ID does not exist" containerID="3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a"
Feb 16 11:20:56 crc kubenswrapper[4797]: I0216 11:20:56.362214 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a"} err="failed to get container status \"3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a\": rpc error: code = NotFound desc = could not find container \"3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a\": container with ID starting with 3d5da96b4b8be25f449bd8b663995cc099bc1bb90f1dde5b5aef537dcac7cb6a not found: ID does not exist"
Feb 16 11:20:56 crc kubenswrapper[4797]: E0216 11:20:56.494934 4797 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod883d8896_8cd4_45e5_8e74_d0be8e1bdb64.slice/crio-conmon-3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod883d8896_8cd4_45e5_8e74_d0be8e1bdb64.slice/crio-3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 11:20:57 crc kubenswrapper[4797]: I0216 11:20:57.295979 4797 generic.go:334] "Generic (PLEG): container finished" podID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerID="3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca" exitCode=0
Feb 16 11:20:57 crc kubenswrapper[4797]: I0216 11:20:57.296073 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kf4dx" event={"ID":"883d8896-8cd4-45e5-8e74-d0be8e1bdb64","Type":"ContainerDied","Data":"3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca"}
Feb 16 11:20:57 crc kubenswrapper[4797]: I0216 11:20:57.998602 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bee3530-9188-4108-b414-83e85ef543a6" path="/var/lib/kubelet/pods/5bee3530-9188-4108-b414-83e85ef543a6/volumes"
Feb 16 11:20:58 crc kubenswrapper[4797]: I0216 11:20:58.304307 4797 generic.go:334] "Generic (PLEG): container finished" podID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerID="fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c" exitCode=0
Feb 16 11:20:58 crc kubenswrapper[4797]: I0216 11:20:58.304668 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kf4dx" event={"ID":"883d8896-8cd4-45e5-8e74-d0be8e1bdb64","Type":"ContainerDied","Data":"fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c"}
Feb 16 11:21:00 crc kubenswrapper[4797]: I0216 11:21:00.318850 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kf4dx" event={"ID":"883d8896-8cd4-45e5-8e74-d0be8e1bdb64","Type":"ContainerStarted","Data":"5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4"}
Feb 16 11:21:00 crc kubenswrapper[4797]: I0216 11:21:00.341146 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kf4dx" podStartSLOduration=3.9107691449999997 podStartE2EDuration="5.341128236s" podCreationTimestamp="2026-02-16 11:20:55 +0000 UTC" firstStartedPulling="2026-02-16 11:20:57.298601035 +0000 UTC m=+852.018786015" lastFinishedPulling="2026-02-16 11:20:58.728960126 +0000 UTC m=+853.449145106" observedRunningTime="2026-02-16 11:21:00.335701788 +0000 UTC m=+855.055886788" watchObservedRunningTime="2026-02-16 11:21:00.341128236 +0000 UTC m=+855.061313216"
Feb 16 11:21:00 crc kubenswrapper[4797]: I0216 11:21:00.589072 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-965c86b89-tdccp"
Feb 16 11:21:06 crc kubenswrapper[4797]: I0216 11:21:06.015957 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:21:06 crc kubenswrapper[4797]: I0216 11:21:06.016254 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:21:06 crc kubenswrapper[4797]: I0216 11:21:06.061655 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:21:06 crc kubenswrapper[4797]: I0216 11:21:06.420655 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:21:08 crc kubenswrapper[4797]: I0216 11:21:08.488191 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kf4dx"]
Feb 16 11:21:08 crc kubenswrapper[4797]: I0216 11:21:08.488866 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kf4dx" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="registry-server" containerID="cri-o://5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4" gracePeriod=2
Feb 16 11:21:08 crc kubenswrapper[4797]: I0216 11:21:08.915227 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.062606 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-catalog-content\") pod \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") "
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.062660 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6zkp\" (UniqueName: \"kubernetes.io/projected/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-kube-api-access-z6zkp\") pod \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") "
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.062706 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-utilities\") pod \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\" (UID: \"883d8896-8cd4-45e5-8e74-d0be8e1bdb64\") "
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.063717 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-utilities" (OuterVolumeSpecName: "utilities") pod "883d8896-8cd4-45e5-8e74-d0be8e1bdb64" (UID: "883d8896-8cd4-45e5-8e74-d0be8e1bdb64"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.071230 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-kube-api-access-z6zkp" (OuterVolumeSpecName: "kube-api-access-z6zkp") pod "883d8896-8cd4-45e5-8e74-d0be8e1bdb64" (UID: "883d8896-8cd4-45e5-8e74-d0be8e1bdb64"). InnerVolumeSpecName "kube-api-access-z6zkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.095204 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "883d8896-8cd4-45e5-8e74-d0be8e1bdb64" (UID: "883d8896-8cd4-45e5-8e74-d0be8e1bdb64"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.164225 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.164265 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6zkp\" (UniqueName: \"kubernetes.io/projected/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-kube-api-access-z6zkp\") on node \"crc\" DevicePath \"\""
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.164279 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883d8896-8cd4-45e5-8e74-d0be8e1bdb64-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.380268 4797 generic.go:334] "Generic (PLEG): container finished" podID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerID="5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4" exitCode=0
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.380322 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kf4dx" event={"ID":"883d8896-8cd4-45e5-8e74-d0be8e1bdb64","Type":"ContainerDied","Data":"5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4"}
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.380355 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kf4dx"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.380371 4797 scope.go:117] "RemoveContainer" containerID="5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.380357 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kf4dx" event={"ID":"883d8896-8cd4-45e5-8e74-d0be8e1bdb64","Type":"ContainerDied","Data":"bf81a8806eea8f892a38da9457fce82c32eb9634b2c70e04d49406dc93535941"}
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.404010 4797 scope.go:117] "RemoveContainer" containerID="fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.423143 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kf4dx"]
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.425352 4797 scope.go:117] "RemoveContainer" containerID="3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.429086 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kf4dx"]
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.451601 4797 scope.go:117] "RemoveContainer" containerID="5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4"
Feb 16 11:21:09 crc kubenswrapper[4797]: E0216 11:21:09.452213 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4\": container with ID starting with 5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4 not found: ID does not exist" containerID="5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.452281 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4"} err="failed to get container status \"5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4\": rpc error: code = NotFound desc = could not find container \"5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4\": container with ID starting with 5ea23be1911fa2a2f616a1e34254d3023287e99ad4715c4744b3183ae78678e4 not found: ID does not exist"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.452317 4797 scope.go:117] "RemoveContainer" containerID="fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c"
Feb 16 11:21:09 crc kubenswrapper[4797]: E0216 11:21:09.452748 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c\": container with ID starting with fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c not found: ID does not exist" containerID="fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.452845 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c"} err="failed to get container status \"fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c\": rpc error: code = NotFound desc = could not find container \"fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c\": container with ID starting with fc82a2980064860d2cb3cd076af036bd3f3df40b1d73f3367b346c9ccdb6e48c not found: ID does not exist"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.452926 4797 scope.go:117] "RemoveContainer" containerID="3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca"
Feb 16 11:21:09 crc kubenswrapper[4797]: E0216 11:21:09.453403 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca\": container with ID starting with 3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca not found: ID does not exist" containerID="3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.453481 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca"} err="failed to get container status \"3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca\": rpc error: code = NotFound desc = could not find container \"3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca\": container with ID starting with 3ca33d50f842da58098a4c4a506d9a4e94185de2be0a2e8af0988d57f1d425ca not found: ID does not exist"
Feb 16 11:21:09 crc kubenswrapper[4797]: I0216 11:21:09.991213 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" path="/var/lib/kubelet/pods/883d8896-8cd4-45e5-8e74-d0be8e1bdb64/volumes"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.211954 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7576dc79b7-85mbt"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.985438 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-57ng7"]
Feb 16 11:21:20 crc kubenswrapper[4797]: E0216 11:21:20.985708 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="registry-server"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.985721 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="registry-server"
Feb 16 11:21:20 crc kubenswrapper[4797]: E0216 11:21:20.985738 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="extract-content"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.985744 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="extract-content"
Feb 16 11:21:20 crc kubenswrapper[4797]: E0216 11:21:20.985752 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="extract-utilities"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.985758 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="extract-utilities"
Feb 16 11:21:20 crc kubenswrapper[4797]: E0216 11:21:20.985768 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="registry-server"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.985774 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="registry-server"
Feb 16 11:21:20 crc kubenswrapper[4797]: E0216 11:21:20.985781 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="extract-content"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.985786 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="extract-content"
Feb 16 11:21:20 crc kubenswrapper[4797]: E0216 11:21:20.985796 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="extract-utilities"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.985802 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="extract-utilities"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.986107 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="883d8896-8cd4-45e5-8e74-d0be8e1bdb64" containerName="registry-server"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.986121 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bee3530-9188-4108-b414-83e85ef543a6" containerName="registry-server"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.988000 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.989692 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.989944 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.990131 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-dd8fc"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.994659 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"]
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.995409 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:20 crc kubenswrapper[4797]: I0216 11:21:20.997086 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.011137 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"]
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021713 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-reloader\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021783 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-conf\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021816 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a890db88-edf0-48b0-82e7-f83d8d762493-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fl28v\" (UID: \"a890db88-edf0-48b0-82e7-f83d8d762493\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021837 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics-certs\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021875 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-startup\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021895 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-sockets\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021937 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt59v\" (UniqueName: \"kubernetes.io/projected/a890db88-edf0-48b0-82e7-f83d8d762493-kube-api-access-dt59v\") pod \"frr-k8s-webhook-server-78b44bf5bb-fl28v\" (UID: \"a890db88-edf0-48b0-82e7-f83d8d762493\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.021979 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2bl\" (UniqueName: \"kubernetes.io/projected/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-kube-api-access-mn2bl\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.022003 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.076382 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-q95br"]
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.077539 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.079536 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.082626 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.087672 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-99x9f"]
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.088606 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.091383 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.091462 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-m5cq6"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.092179 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.108307 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-99x9f"]
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123253 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn2bl\" (UniqueName: \"kubernetes.io/projected/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-kube-api-access-mn2bl\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123302 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123343 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-reloader\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123375 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-conf\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123404 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a890db88-edf0-48b0-82e7-f83d8d762493-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fl28v\" (UID: \"a890db88-edf0-48b0-82e7-f83d8d762493\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123422 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics-certs\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123453 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-startup\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123470 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-sockets\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.123497 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt59v\" (UniqueName: \"kubernetes.io/projected/a890db88-edf0-48b0-82e7-f83d8d762493-kube-api-access-dt59v\") pod \"frr-k8s-webhook-server-78b44bf5bb-fl28v\" (UID: \"a890db88-edf0-48b0-82e7-f83d8d762493\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.124595 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.124777 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-reloader\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.124977 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-conf\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.125039 4797 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.125077 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a890db88-edf0-48b0-82e7-f83d8d762493-cert podName:a890db88-edf0-48b0-82e7-f83d8d762493 nodeName:}" failed. No retries permitted until 2026-02-16 11:21:21.625064109 +0000 UTC m=+876.345249079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a890db88-edf0-48b0-82e7-f83d8d762493-cert") pod "frr-k8s-webhook-server-78b44bf5bb-fl28v" (UID: "a890db88-edf0-48b0-82e7-f83d8d762493") : secret "frr-k8s-webhook-server-cert" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.125257 4797 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.125285 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics-certs podName:ae064aa9-f20f-4271-80aa-4df1aa1ecd35 nodeName:}" failed. No retries permitted until 2026-02-16 11:21:21.625277775 +0000 UTC m=+876.345462755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics-certs") pod "frr-k8s-57ng7" (UID: "ae064aa9-f20f-4271-80aa-4df1aa1ecd35") : secret "frr-k8s-certs-secret" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.126007 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-startup\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.126195 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-frr-sockets\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.145891 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn2bl\" (UniqueName: \"kubernetes.io/projected/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-kube-api-access-mn2bl\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.146005 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt59v\" (UniqueName: \"kubernetes.io/projected/a890db88-edf0-48b0-82e7-f83d8d762493-kube-api-access-dt59v\") pod \"frr-k8s-webhook-server-78b44bf5bb-fl28v\" (UID: \"a890db88-edf0-48b0-82e7-f83d8d762493\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.225481 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w7xq\" (UniqueName: \"kubernetes.io/projected/b451686c-e089-48e1-82a2-1a889e465691-kube-api-access-4w7xq\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.225792 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b451686c-e089-48e1-82a2-1a889e465691-metallb-excludel2\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.225868 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-metrics-certs\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.225918 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jqxw\" (UniqueName: \"kubernetes.io/projected/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-kube-api-access-7jqxw\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.225959 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-metrics-certs\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.225996 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.226033 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-cert\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.327530 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.327642 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-cert\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.327722 4797 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.327812 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist podName:b451686c-e089-48e1-82a2-1a889e465691 nodeName:}" failed. No retries permitted until 2026-02-16 11:21:21.827793699 +0000 UTC m=+876.547978679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist") pod "speaker-q95br" (UID: "b451686c-e089-48e1-82a2-1a889e465691") : secret "metallb-memberlist" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.327749 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w7xq\" (UniqueName: \"kubernetes.io/projected/b451686c-e089-48e1-82a2-1a889e465691-kube-api-access-4w7xq\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.327879 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b451686c-e089-48e1-82a2-1a889e465691-metallb-excludel2\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.327952 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-metrics-certs\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.327974 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jqxw\" (UniqueName: \"kubernetes.io/projected/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-kube-api-access-7jqxw\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.328018 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-metrics-certs\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.329016 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b451686c-e089-48e1-82a2-1a889e465691-metallb-excludel2\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.330069 4797 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.331733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-metrics-certs\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.332770 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-metrics-certs\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.342012 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-cert\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.352845 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w7xq\" (UniqueName: \"kubernetes.io/projected/b451686c-e089-48e1-82a2-1a889e465691-kube-api-access-4w7xq\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.356277 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jqxw\" (UniqueName: \"kubernetes.io/projected/99dd2e9f-adf7-4fe6-861b-d66125f5b08c-kube-api-access-7jqxw\") pod \"controller-69bbfbf88f-99x9f\" (UID: \"99dd2e9f-adf7-4fe6-861b-d66125f5b08c\") " pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.400880 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.633375 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a890db88-edf0-48b0-82e7-f83d8d762493-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fl28v\" (UID: \"a890db88-edf0-48b0-82e7-f83d8d762493\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.633443 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics-certs\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.641245 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a890db88-edf0-48b0-82e7-f83d8d762493-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-fl28v\" (UID: \"a890db88-edf0-48b0-82e7-f83d8d762493\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.641293 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae064aa9-f20f-4271-80aa-4df1aa1ecd35-metrics-certs\") pod \"frr-k8s-57ng7\" (UID: \"ae064aa9-f20f-4271-80aa-4df1aa1ecd35\") " pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.834654 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br"
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.834794 4797 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: E0216 11:21:21.834849 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist podName:b451686c-e089-48e1-82a2-1a889e465691 nodeName:}" failed. No retries permitted until 2026-02-16 11:21:22.834833283 +0000 UTC m=+877.555018263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist") pod "speaker-q95br" (UID: "b451686c-e089-48e1-82a2-1a889e465691") : secret "metallb-memberlist" not found
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.869122 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-99x9f"]
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.910275 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-57ng7"
Feb 16 11:21:21 crc kubenswrapper[4797]: I0216 11:21:21.920099 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.159253 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v"]
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.476094 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v" event={"ID":"a890db88-edf0-48b0-82e7-f83d8d762493","Type":"ContainerStarted","Data":"509911d5a1862a84bf747c07e13e6fc51e530a77e2faee919e29c484a3434f1e"}
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.478150 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-99x9f" event={"ID":"99dd2e9f-adf7-4fe6-861b-d66125f5b08c","Type":"ContainerStarted","Data":"1253922686fa750c99123a9e76510f160c7b3cd4d9af8dec579bb8da21cce12b"}
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.478200 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-99x9f" event={"ID":"99dd2e9f-adf7-4fe6-861b-d66125f5b08c","Type":"ContainerStarted","Data":"1721702ad3c4c2a9552c86de252c07d296e4a04f6633ed4e2f5bd7a6c3cf5593"}
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.478215 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-99x9f" event={"ID":"99dd2e9f-adf7-4fe6-861b-d66125f5b08c","Type":"ContainerStarted","Data":"8ae6ac673b2f3bd58eaa181208fbfe8ec1b794b4055b8650dbb4ae9259a375a3"}
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.478291 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-99x9f"
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.479131 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerStarted","Data":"5af0fdbb90d7d4e361c5b2e30d5a27e2b44d7398509d9da1d0da68cf2dffc58a"}
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.495326 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-99x9f" podStartSLOduration=1.495302723 podStartE2EDuration="1.495302723s" podCreationTimestamp="2026-02-16 11:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:21:22.494121481 +0000 UTC m=+877.214306471" watchObservedRunningTime="2026-02-16 11:21:22.495302723 +0000 UTC m=+877.215487723"
Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.851507 4797 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br" Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.874917 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b451686c-e089-48e1-82a2-1a889e465691-memberlist\") pod \"speaker-q95br\" (UID: \"b451686c-e089-48e1-82a2-1a889e465691\") " pod="metallb-system/speaker-q95br" Feb 16 11:21:22 crc kubenswrapper[4797]: I0216 11:21:22.890059 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-q95br" Feb 16 11:21:22 crc kubenswrapper[4797]: W0216 11:21:22.921880 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb451686c_e089_48e1_82a2_1a889e465691.slice/crio-b0f64ee1410077c2928ee14384900128ad8b8d4480125d5f89cb160a938f437b WatchSource:0}: Error finding container b0f64ee1410077c2928ee14384900128ad8b8d4480125d5f89cb160a938f437b: Status 404 returned error can't find the container with id b0f64ee1410077c2928ee14384900128ad8b8d4480125d5f89cb160a938f437b Feb 16 11:21:23 crc kubenswrapper[4797]: I0216 11:21:23.494015 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q95br" event={"ID":"b451686c-e089-48e1-82a2-1a889e465691","Type":"ContainerStarted","Data":"abfb5acfbe86a7868baff7b82bf96220fa949b2c48ca1a72f274ede12aa41275"} Feb 16 11:21:23 crc kubenswrapper[4797]: I0216 11:21:23.494105 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q95br" event={"ID":"b451686c-e089-48e1-82a2-1a889e465691","Type":"ContainerStarted","Data":"31f757febdc6a076120594228f6f308491bf1ac810f55816ddc8a2743cb8e80f"} Feb 16 11:21:23 crc kubenswrapper[4797]: I0216 11:21:23.494118 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-q95br" event={"ID":"b451686c-e089-48e1-82a2-1a889e465691","Type":"ContainerStarted","Data":"b0f64ee1410077c2928ee14384900128ad8b8d4480125d5f89cb160a938f437b"} Feb 16 11:21:23 crc kubenswrapper[4797]: I0216 11:21:23.494709 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-q95br" Feb 16 11:21:23 crc kubenswrapper[4797]: I0216 11:21:23.531176 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-q95br" podStartSLOduration=2.5311531240000003 podStartE2EDuration="2.531153124s" podCreationTimestamp="2026-02-16 11:21:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:21:23.525193402 +0000 UTC m=+878.245378392" watchObservedRunningTime="2026-02-16 11:21:23.531153124 +0000 UTC m=+878.251338104" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.630391 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-64p74"] Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.632303 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.657186 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64p74"] Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.692477 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-catalog-content\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.692561 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-utilities\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.692743 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw29d\" (UniqueName: \"kubernetes.io/projected/966378c2-8156-46b4-bf88-8f2a97a96067-kube-api-access-bw29d\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.794158 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw29d\" (UniqueName: \"kubernetes.io/projected/966378c2-8156-46b4-bf88-8f2a97a96067-kube-api-access-bw29d\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.794913 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-catalog-content\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.795501 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-catalog-content\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.795697 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-utilities\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.797209 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-utilities\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.824726 4797 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bw29d\" (UniqueName: \"kubernetes.io/projected/966378c2-8156-46b4-bf88-8f2a97a96067-kube-api-access-bw29d\") pod \"certified-operators-64p74\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:25 crc kubenswrapper[4797]: I0216 11:21:25.953223 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:26 crc kubenswrapper[4797]: W0216 11:21:26.533027 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod966378c2_8156_46b4_bf88_8f2a97a96067.slice/crio-095371bc5fa9df6f3c8a1acced05fb5f96ddbe90339208c8bb574adcf57c6d81 WatchSource:0}: Error finding container 095371bc5fa9df6f3c8a1acced05fb5f96ddbe90339208c8bb574adcf57c6d81: Status 404 returned error can't find the container with id 095371bc5fa9df6f3c8a1acced05fb5f96ddbe90339208c8bb574adcf57c6d81 Feb 16 11:21:26 crc kubenswrapper[4797]: I0216 11:21:26.553502 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64p74"] Feb 16 11:21:27 crc kubenswrapper[4797]: I0216 11:21:27.560717 4797 generic.go:334] "Generic (PLEG): container finished" podID="966378c2-8156-46b4-bf88-8f2a97a96067" containerID="755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e" exitCode=0 Feb 16 11:21:27 crc kubenswrapper[4797]: I0216 11:21:27.560825 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64p74" event={"ID":"966378c2-8156-46b4-bf88-8f2a97a96067","Type":"ContainerDied","Data":"755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e"} Feb 16 11:21:27 crc kubenswrapper[4797]: I0216 11:21:27.561070 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64p74" event={"ID":"966378c2-8156-46b4-bf88-8f2a97a96067","Type":"ContainerStarted","Data":"095371bc5fa9df6f3c8a1acced05fb5f96ddbe90339208c8bb574adcf57c6d81"} Feb 16 11:21:31 crc kubenswrapper[4797]: I0216 11:21:31.594751 4797 generic.go:334] "Generic (PLEG): container finished" podID="966378c2-8156-46b4-bf88-8f2a97a96067" containerID="1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c" exitCode=0 Feb 16 11:21:31 crc kubenswrapper[4797]: I0216 11:21:31.594903 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64p74" event={"ID":"966378c2-8156-46b4-bf88-8f2a97a96067","Type":"ContainerDied","Data":"1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c"} Feb 16 11:21:31 crc kubenswrapper[4797]: I0216 11:21:31.603462 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v" event={"ID":"a890db88-edf0-48b0-82e7-f83d8d762493","Type":"ContainerStarted","Data":"78a9ecec30ebf06343c7f544c4c3e31438a64d12688b7cd1aa96589cd8cbba0e"} Feb 16 11:21:31 crc kubenswrapper[4797]: I0216 11:21:31.603634 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v" Feb 16 11:21:31 crc kubenswrapper[4797]: I0216 11:21:31.606078 4797 generic.go:334] "Generic (PLEG): container finished" podID="ae064aa9-f20f-4271-80aa-4df1aa1ecd35" containerID="7d39e8ef1fd3ac215a3e3f02e64df8d134745ba91c1a41b20bf2bbd94daff513" exitCode=0 Feb 16 11:21:31 crc kubenswrapper[4797]: I0216 11:21:31.606109 4797 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerDied","Data":"7d39e8ef1fd3ac215a3e3f02e64df8d134745ba91c1a41b20bf2bbd94daff513"} Feb 16 11:21:31 crc kubenswrapper[4797]: I0216 11:21:31.727197 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v" podStartSLOduration=3.269776448 podStartE2EDuration="11.727178657s" podCreationTimestamp="2026-02-16 11:21:20 +0000 UTC" firstStartedPulling="2026-02-16 11:21:22.171010345 +0000 UTC m=+876.891195325" lastFinishedPulling="2026-02-16 11:21:30.628412554 +0000 UTC m=+885.348597534" observedRunningTime="2026-02-16 11:21:31.726087847 +0000 UTC m=+886.446272837" watchObservedRunningTime="2026-02-16 11:21:31.727178657 +0000 UTC m=+886.447363637" Feb 16 11:21:32 crc kubenswrapper[4797]: I0216 11:21:32.622024 4797 generic.go:334] "Generic (PLEG): container finished" podID="ae064aa9-f20f-4271-80aa-4df1aa1ecd35" containerID="c0469dd8c4f6a0ea60f12a56f198a66fb00d5f994003291aeb58fdc471944f76" exitCode=0 Feb 16 11:21:32 crc kubenswrapper[4797]: I0216 11:21:32.622106 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerDied","Data":"c0469dd8c4f6a0ea60f12a56f198a66fb00d5f994003291aeb58fdc471944f76"} Feb 16 11:21:32 crc kubenswrapper[4797]: I0216 11:21:32.627111 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64p74" event={"ID":"966378c2-8156-46b4-bf88-8f2a97a96067","Type":"ContainerStarted","Data":"b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af"} Feb 16 11:21:32 crc kubenswrapper[4797]: I0216 11:21:32.685861 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-64p74" podStartSLOduration=5.075729208 podStartE2EDuration="7.685837136s" podCreationTimestamp="2026-02-16 11:21:25 +0000 UTC" firstStartedPulling="2026-02-16 11:21:29.502003808 +0000 UTC m=+884.222188788" lastFinishedPulling="2026-02-16 11:21:32.112111736 +0000 UTC m=+886.832296716" observedRunningTime="2026-02-16 11:21:32.678511337 +0000 UTC m=+887.398696317" watchObservedRunningTime="2026-02-16 11:21:32.685837136 +0000 UTC m=+887.406022116" Feb 16 11:21:33 crc kubenswrapper[4797]: I0216 11:21:33.636618 4797 generic.go:334] "Generic (PLEG): container finished" podID="ae064aa9-f20f-4271-80aa-4df1aa1ecd35" containerID="2aa116ac057a6b22a2ffae4b845fd2607c6e0812e16caa28ea404ad739f88517" exitCode=0 Feb 16 11:21:33 crc kubenswrapper[4797]: I0216 11:21:33.636709 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerDied","Data":"2aa116ac057a6b22a2ffae4b845fd2607c6e0812e16caa28ea404ad739f88517"} Feb 16 11:21:34 crc kubenswrapper[4797]: I0216 11:21:34.649612 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerStarted","Data":"72df51bf9cc7721f1c8a887a57bb6302347a965a3950e07532f80fd0e506b1d5"} Feb 16 11:21:34 crc kubenswrapper[4797]: I0216 11:21:34.649845 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerStarted","Data":"3de00b1d4f2b536f4bc24c920a61d61c2cd2400f4e79c1bc1c662eb1b5f0123c"} Feb 16 11:21:34 
crc kubenswrapper[4797]: I0216 11:21:34.649856 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerStarted","Data":"0e22b958bd76dd88925e004103d4146ce0b33f13a2cfa629a5ca0cf8e7ec3c7a"} Feb 16 11:21:34 crc kubenswrapper[4797]: I0216 11:21:34.649864 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerStarted","Data":"8673130610aad892b674173a85741b60b4cc32e3611d1db988278f0bbd61407c"} Feb 16 11:21:34 crc kubenswrapper[4797]: I0216 11:21:34.649873 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerStarted","Data":"aaa6584480d73e28d34bb0bac2ddbaef81e11876a8cd3faee383d286c44bb61d"} Feb 16 11:21:35 crc kubenswrapper[4797]: I0216 11:21:35.669954 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-57ng7" event={"ID":"ae064aa9-f20f-4271-80aa-4df1aa1ecd35","Type":"ContainerStarted","Data":"4a3fb12b33d11e6709df3dba4225a4f564b75860be64cef3b0fcf5b0bc8c3d12"} Feb 16 11:21:35 crc kubenswrapper[4797]: I0216 11:21:35.671101 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-57ng7" Feb 16 11:21:35 crc kubenswrapper[4797]: I0216 11:21:35.707924 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-57ng7" podStartSLOduration=7.120210694 podStartE2EDuration="15.70790442s" podCreationTimestamp="2026-02-16 11:21:20 +0000 UTC" firstStartedPulling="2026-02-16 11:21:22.040766249 +0000 UTC m=+876.760951229" lastFinishedPulling="2026-02-16 11:21:30.628459965 +0000 UTC m=+885.348644955" observedRunningTime="2026-02-16 11:21:35.701046334 +0000 UTC m=+890.421231334" watchObservedRunningTime="2026-02-16 11:21:35.70790442 +0000 UTC m=+890.428089410" Feb 16 11:21:35 crc kubenswrapper[4797]: I0216 11:21:35.954017 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:35 crc kubenswrapper[4797]: I0216 11:21:35.954451 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:36 crc kubenswrapper[4797]: I0216 11:21:36.018305 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:36 crc kubenswrapper[4797]: I0216 11:21:36.911656 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-57ng7" Feb 16 11:21:36 crc kubenswrapper[4797]: I0216 11:21:36.951733 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-57ng7" Feb 16 11:21:37 crc kubenswrapper[4797]: I0216 11:21:37.752248 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:37 crc kubenswrapper[4797]: I0216 11:21:37.806491 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-64p74"] Feb 16 11:21:39 crc kubenswrapper[4797]: I0216 11:21:39.704919 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-64p74" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="registry-server" 
containerID="cri-o://b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af" gracePeriod=2 Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.082080 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.218431 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-catalog-content\") pod \"966378c2-8156-46b4-bf88-8f2a97a96067\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.219134 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw29d\" (UniqueName: \"kubernetes.io/projected/966378c2-8156-46b4-bf88-8f2a97a96067-kube-api-access-bw29d\") pod \"966378c2-8156-46b4-bf88-8f2a97a96067\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.219269 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-utilities\") pod \"966378c2-8156-46b4-bf88-8f2a97a96067\" (UID: \"966378c2-8156-46b4-bf88-8f2a97a96067\") " Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.220170 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-utilities" (OuterVolumeSpecName: "utilities") pod "966378c2-8156-46b4-bf88-8f2a97a96067" (UID: "966378c2-8156-46b4-bf88-8f2a97a96067"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.230043 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/966378c2-8156-46b4-bf88-8f2a97a96067-kube-api-access-bw29d" (OuterVolumeSpecName: "kube-api-access-bw29d") pod "966378c2-8156-46b4-bf88-8f2a97a96067" (UID: "966378c2-8156-46b4-bf88-8f2a97a96067"). InnerVolumeSpecName "kube-api-access-bw29d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.279276 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "966378c2-8156-46b4-bf88-8f2a97a96067" (UID: "966378c2-8156-46b4-bf88-8f2a97a96067"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.320626 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw29d\" (UniqueName: \"kubernetes.io/projected/966378c2-8156-46b4-bf88-8f2a97a96067-kube-api-access-bw29d\") on node \"crc\" DevicePath \"\"" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.320672 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.320686 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/966378c2-8156-46b4-bf88-8f2a97a96067-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.717227 4797 generic.go:334] "Generic (PLEG): container finished" podID="966378c2-8156-46b4-bf88-8f2a97a96067" containerID="b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af" exitCode=0 Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.717275 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64p74" event={"ID":"966378c2-8156-46b4-bf88-8f2a97a96067","Type":"ContainerDied","Data":"b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af"} Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.717305 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64p74" event={"ID":"966378c2-8156-46b4-bf88-8f2a97a96067","Type":"ContainerDied","Data":"095371bc5fa9df6f3c8a1acced05fb5f96ddbe90339208c8bb574adcf57c6d81"} Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.717328 4797 scope.go:117] "RemoveContainer" containerID="b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.717469 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-64p74" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.745019 4797 scope.go:117] "RemoveContainer" containerID="1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.759116 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-64p74"] Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.766436 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-64p74"] Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.779891 4797 scope.go:117] "RemoveContainer" containerID="755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.795876 4797 scope.go:117] "RemoveContainer" containerID="b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af" Feb 16 11:21:40 crc kubenswrapper[4797]: E0216 11:21:40.796513 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af\": container with ID starting with b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af not found: ID does not exist" containerID="b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.796562 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af"} err="failed to get container status \"b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af\": rpc error: code = NotFound desc = could not find container \"b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af\": container with ID starting with b4a50cf9a1b3f8c92a05cc367db7dce9dea7dbbdbe8df34829f0b454c18329af not found: ID does not exist" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.796617 4797 scope.go:117] "RemoveContainer" containerID="1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c" Feb 16 11:21:40 crc kubenswrapper[4797]: E0216 11:21:40.797525 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c\": container with ID starting with 1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c not found: ID does not exist" containerID="1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.797550 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c"} err="failed to get container status \"1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c\": rpc error: code = NotFound desc = could not find container \"1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c\": container with ID starting with 1e30779756e5a6c28200b62e4576a7d89f2afa6cbe9dacbeb6dc21b1cfc18d4c not found: ID does not exist" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.797563 4797 scope.go:117] "RemoveContainer" containerID="755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e" Feb 16 11:21:40 crc kubenswrapper[4797]: E0216 11:21:40.798168 4797 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e\": container with ID starting with 755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e not found: ID does not exist" containerID="755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e" Feb 16 11:21:40 crc kubenswrapper[4797]: I0216 11:21:40.798196 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e"} err="failed to get container status \"755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e\": rpc error: code = NotFound desc = could not find container \"755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e\": container with ID starting with 755d8d6050349e4e8944de42c36f962631de48e7b31b00b4b5b3e4b9f8f9d04e not found: ID does not exist" Feb 16 11:21:41 crc kubenswrapper[4797]: I0216 11:21:41.405213 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-99x9f" Feb 16 11:21:41 crc kubenswrapper[4797]: I0216 11:21:41.929059 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-fl28v" Feb 16 11:21:42 crc kubenswrapper[4797]: I0216 11:21:42.001975 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" path="/var/lib/kubelet/pods/966378c2-8156-46b4-bf88-8f2a97a96067/volumes" Feb 16 11:21:42 crc kubenswrapper[4797]: I0216 11:21:42.895765 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-q95br" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.625195 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-26c6f"] Feb 16 11:21:45 crc kubenswrapper[4797]: E0216 11:21:45.625506 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="extract-utilities" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.625525 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="extract-utilities" Feb 16 11:21:45 crc kubenswrapper[4797]: E0216 11:21:45.625547 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="registry-server" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.625555 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="registry-server" Feb 16 11:21:45 crc kubenswrapper[4797]: E0216 11:21:45.625627 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="extract-content" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.625637 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="extract-content" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.625782 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="966378c2-8156-46b4-bf88-8f2a97a96067" containerName="registry-server" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.626288 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-26c6f" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.631902 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.632180 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ms4vt" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.636969 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.654911 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-26c6f"] Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.791941 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrgr\" (UniqueName: \"kubernetes.io/projected/cb98c0d7-e509-4fbc-b751-d544b2d2ecd8-kube-api-access-pjrgr\") pod \"openstack-operator-index-26c6f\" (UID: \"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8\") " pod="openstack-operators/openstack-operator-index-26c6f" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.892836 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjrgr\" (UniqueName: \"kubernetes.io/projected/cb98c0d7-e509-4fbc-b751-d544b2d2ecd8-kube-api-access-pjrgr\") pod \"openstack-operator-index-26c6f\" (UID: \"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8\") " pod="openstack-operators/openstack-operator-index-26c6f" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.912058 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjrgr\" (UniqueName: \"kubernetes.io/projected/cb98c0d7-e509-4fbc-b751-d544b2d2ecd8-kube-api-access-pjrgr\") pod \"openstack-operator-index-26c6f\" (UID: \"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8\") " pod="openstack-operators/openstack-operator-index-26c6f" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.948199 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-ms4vt" Feb 16 11:21:45 crc kubenswrapper[4797]: I0216 11:21:45.956213 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-26c6f" Feb 16 11:21:46 crc kubenswrapper[4797]: I0216 11:21:46.414180 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-26c6f"] Feb 16 11:21:46 crc kubenswrapper[4797]: W0216 11:21:46.417566 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb98c0d7_e509_4fbc_b751_d544b2d2ecd8.slice/crio-d7702bf9da6d880985abe94b9580f2aa9c955d33c76a80bc377eeac8db512ae6 WatchSource:0}: Error finding container d7702bf9da6d880985abe94b9580f2aa9c955d33c76a80bc377eeac8db512ae6: Status 404 returned error can't find the container with id d7702bf9da6d880985abe94b9580f2aa9c955d33c76a80bc377eeac8db512ae6 Feb 16 11:21:46 crc kubenswrapper[4797]: I0216 11:21:46.420080 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:21:46 crc kubenswrapper[4797]: I0216 11:21:46.775449 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-26c6f" event={"ID":"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8","Type":"ContainerStarted","Data":"d7702bf9da6d880985abe94b9580f2aa9c955d33c76a80bc377eeac8db512ae6"} Feb 16 11:21:48 crc kubenswrapper[4797]: I0216 11:21:48.395416 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-26c6f"] Feb 16 11:21:48 crc kubenswrapper[4797]: I0216 11:21:48.793846 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-26c6f" event={"ID":"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8","Type":"ContainerStarted","Data":"7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9"} Feb 16 11:21:48 crc kubenswrapper[4797]: I0216 11:21:48.793952 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-26c6f" podUID="cb98c0d7-e509-4fbc-b751-d544b2d2ecd8" containerName="registry-server" containerID="cri-o://7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9" gracePeriod=2 Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.019531 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-26c6f" podStartSLOduration=2.031280648 podStartE2EDuration="4.019503276s" podCreationTimestamp="2026-02-16 11:21:45 +0000 UTC" firstStartedPulling="2026-02-16 11:21:46.419672397 +0000 UTC m=+901.139857377" lastFinishedPulling="2026-02-16 11:21:48.407895025 +0000 UTC m=+903.128080005" observedRunningTime="2026-02-16 11:21:48.823340476 +0000 UTC m=+903.543525936" watchObservedRunningTime="2026-02-16 11:21:49.019503276 +0000 UTC m=+903.739688296" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.021352 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-knqtd"] Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.022661 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.039558 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-knqtd"] Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.149516 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4pfn\" (UniqueName: \"kubernetes.io/projected/92cbdcce-96ac-453d-88f4-67c63be7c272-kube-api-access-m4pfn\") pod \"openstack-operator-index-knqtd\" (UID: \"92cbdcce-96ac-453d-88f4-67c63be7c272\") " pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.242452 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-26c6f" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.251237 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4pfn\" (UniqueName: \"kubernetes.io/projected/92cbdcce-96ac-453d-88f4-67c63be7c272-kube-api-access-m4pfn\") pod \"openstack-operator-index-knqtd\" (UID: \"92cbdcce-96ac-453d-88f4-67c63be7c272\") " pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.285436 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4pfn\" (UniqueName: \"kubernetes.io/projected/92cbdcce-96ac-453d-88f4-67c63be7c272-kube-api-access-m4pfn\") pod \"openstack-operator-index-knqtd\" (UID: \"92cbdcce-96ac-453d-88f4-67c63be7c272\") " pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.345622 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.352349 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjrgr\" (UniqueName: \"kubernetes.io/projected/cb98c0d7-e509-4fbc-b751-d544b2d2ecd8-kube-api-access-pjrgr\") pod \"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8\" (UID: \"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8\") " Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.357514 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb98c0d7-e509-4fbc-b751-d544b2d2ecd8-kube-api-access-pjrgr" (OuterVolumeSpecName: "kube-api-access-pjrgr") pod "cb98c0d7-e509-4fbc-b751-d544b2d2ecd8" (UID: "cb98c0d7-e509-4fbc-b751-d544b2d2ecd8"). InnerVolumeSpecName "kube-api-access-pjrgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.454182 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjrgr\" (UniqueName: \"kubernetes.io/projected/cb98c0d7-e509-4fbc-b751-d544b2d2ecd8-kube-api-access-pjrgr\") on node \"crc\" DevicePath \"\"" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.761546 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-knqtd"] Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.803948 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-knqtd" event={"ID":"92cbdcce-96ac-453d-88f4-67c63be7c272","Type":"ContainerStarted","Data":"f5db817534b1facd2ad15b98583a8872a4539f83c9ca654c85c2e166eb2cb526"} Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.805451 4797 generic.go:334] "Generic (PLEG): container finished" podID="cb98c0d7-e509-4fbc-b751-d544b2d2ecd8" containerID="7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9" exitCode=0 Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.805523 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-26c6f" event={"ID":"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8","Type":"ContainerDied","Data":"7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9"} Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.805611 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-26c6f" event={"ID":"cb98c0d7-e509-4fbc-b751-d544b2d2ecd8","Type":"ContainerDied","Data":"d7702bf9da6d880985abe94b9580f2aa9c955d33c76a80bc377eeac8db512ae6"} Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.805640 4797 scope.go:117] "RemoveContainer" containerID="7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.805547 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-26c6f" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.836002 4797 scope.go:117] "RemoveContainer" containerID="7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9" Feb 16 11:21:49 crc kubenswrapper[4797]: E0216 11:21:49.836463 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9\": container with ID starting with 7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9 not found: ID does not exist" containerID="7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.836507 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9"} err="failed to get container status \"7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9\": rpc error: code = NotFound desc = could not find container \"7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9\": container with ID starting with 7bbc27d99e35cbd3c92a8881313e98ea9a4e8a59d3dfe211dac3b25c3ae18bb9 not found: ID does not exist" Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.857327 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-26c6f"] Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.864311 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-26c6f"] Feb 16 11:21:49 crc kubenswrapper[4797]: I0216 11:21:49.990657 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb98c0d7-e509-4fbc-b751-d544b2d2ecd8" path="/var/lib/kubelet/pods/cb98c0d7-e509-4fbc-b751-d544b2d2ecd8/volumes" Feb 16 11:21:50 crc kubenswrapper[4797]: I0216 11:21:50.818133 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-knqtd" event={"ID":"92cbdcce-96ac-453d-88f4-67c63be7c272","Type":"ContainerStarted","Data":"cf4bf16b7695a025018cbe9f53509926a66eeb2df34068637e87ec88ded3d7ac"} Feb 16 11:21:50 crc kubenswrapper[4797]: I0216 11:21:50.840274 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-knqtd" podStartSLOduration=2.772882311 podStartE2EDuration="2.840244575s" podCreationTimestamp="2026-02-16 11:21:48 +0000 UTC" firstStartedPulling="2026-02-16 11:21:49.769249517 +0000 UTC m=+904.489434497" lastFinishedPulling="2026-02-16 11:21:49.836611771 +0000 UTC m=+904.556796761" observedRunningTime="2026-02-16 11:21:50.838080756 +0000 UTC m=+905.558265766" watchObservedRunningTime="2026-02-16 11:21:50.840244575 +0000 UTC m=+905.560429625" Feb 16 11:21:51 crc kubenswrapper[4797]: I0216 11:21:51.915482 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-57ng7" Feb 16 11:21:59 crc kubenswrapper[4797]: I0216 11:21:59.346569 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:59 crc kubenswrapper[4797]: I0216 11:21:59.348307 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:59 crc kubenswrapper[4797]: I0216 11:21:59.394947 4797 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:21:59 crc kubenswrapper[4797]: I0216 11:21:59.911413 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-knqtd" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.255396 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87"] Feb 16 11:22:02 crc kubenswrapper[4797]: E0216 11:22:02.256028 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb98c0d7-e509-4fbc-b751-d544b2d2ecd8" containerName="registry-server" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.256041 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb98c0d7-e509-4fbc-b751-d544b2d2ecd8" containerName="registry-server" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.256160 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb98c0d7-e509-4fbc-b751-d544b2d2ecd8" containerName="registry-server" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.257508 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.260117 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-kbxnm" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.266309 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87"] Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.333742 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-util\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.333804 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-bundle\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.333827 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg57l\" (UniqueName: \"kubernetes.io/projected/61f49bc4-6aee-43e9-8fc4-8380546e9da4-kube-api-access-gg57l\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.434721 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-util\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " 
pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.434821 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-bundle\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.434851 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg57l\" (UniqueName: \"kubernetes.io/projected/61f49bc4-6aee-43e9-8fc4-8380546e9da4-kube-api-access-gg57l\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.435251 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-util\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.435305 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-bundle\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.455799 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg57l\" (UniqueName: \"kubernetes.io/projected/61f49bc4-6aee-43e9-8fc4-8380546e9da4-kube-api-access-gg57l\") pod \"fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:02 crc kubenswrapper[4797]: I0216 11:22:02.582839 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:03 crc kubenswrapper[4797]: I0216 11:22:03.020338 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87"] Feb 16 11:22:03 crc kubenswrapper[4797]: I0216 11:22:03.914688 4797 generic.go:334] "Generic (PLEG): container finished" podID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerID="2cbc1eba1c4d3e8aff5b2ce5fbe93dedc9e2bd8ffa0896ce4d343114efa5bc34" exitCode=0 Feb 16 11:22:03 crc kubenswrapper[4797]: I0216 11:22:03.914746 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" event={"ID":"61f49bc4-6aee-43e9-8fc4-8380546e9da4","Type":"ContainerDied","Data":"2cbc1eba1c4d3e8aff5b2ce5fbe93dedc9e2bd8ffa0896ce4d343114efa5bc34"} Feb 16 11:22:03 crc kubenswrapper[4797]: I0216 11:22:03.914958 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" event={"ID":"61f49bc4-6aee-43e9-8fc4-8380546e9da4","Type":"ContainerStarted","Data":"d69eefcb120fb7459d11370ccd9d60c77733f749a7a56ee3a8314a98afd5201c"} Feb 16 11:22:04 crc kubenswrapper[4797]: I0216 11:22:04.927427 4797 generic.go:334] "Generic (PLEG): container finished" podID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerID="c820cd36315012c181b04bf7d5f9bf5be109ce04e55f90c084530cd526286696" exitCode=0 Feb 16 11:22:04 crc kubenswrapper[4797]: I0216 11:22:04.927468 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" event={"ID":"61f49bc4-6aee-43e9-8fc4-8380546e9da4","Type":"ContainerDied","Data":"c820cd36315012c181b04bf7d5f9bf5be109ce04e55f90c084530cd526286696"} Feb 16 11:22:05 crc kubenswrapper[4797]: I0216 11:22:05.939912 4797 generic.go:334] "Generic (PLEG): container finished" podID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerID="609c30185db67b53b192ba29d48447e263134af8a9d9bb99c17a8cca6d66f4b3" exitCode=0 Feb 16 11:22:05 crc kubenswrapper[4797]: I0216 11:22:05.939967 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" event={"ID":"61f49bc4-6aee-43e9-8fc4-8380546e9da4","Type":"ContainerDied","Data":"609c30185db67b53b192ba29d48447e263134af8a9d9bb99c17a8cca6d66f4b3"} Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.248835 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.405568 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-util\") pod \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.405688 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-bundle\") pod \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.405780 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg57l\" (UniqueName: \"kubernetes.io/projected/61f49bc4-6aee-43e9-8fc4-8380546e9da4-kube-api-access-gg57l\") pod \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\" (UID: \"61f49bc4-6aee-43e9-8fc4-8380546e9da4\") " Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.406359 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-bundle" (OuterVolumeSpecName: "bundle") pod "61f49bc4-6aee-43e9-8fc4-8380546e9da4" (UID: "61f49bc4-6aee-43e9-8fc4-8380546e9da4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.414805 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61f49bc4-6aee-43e9-8fc4-8380546e9da4-kube-api-access-gg57l" (OuterVolumeSpecName: "kube-api-access-gg57l") pod "61f49bc4-6aee-43e9-8fc4-8380546e9da4" (UID: "61f49bc4-6aee-43e9-8fc4-8380546e9da4"). InnerVolumeSpecName "kube-api-access-gg57l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.422494 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-util" (OuterVolumeSpecName: "util") pod "61f49bc4-6aee-43e9-8fc4-8380546e9da4" (UID: "61f49bc4-6aee-43e9-8fc4-8380546e9da4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.506722 4797 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-util\") on node \"crc\" DevicePath \"\"" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.506751 4797 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/61f49bc4-6aee-43e9-8fc4-8380546e9da4-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.506762 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg57l\" (UniqueName: \"kubernetes.io/projected/61f49bc4-6aee-43e9-8fc4-8380546e9da4-kube-api-access-gg57l\") on node \"crc\" DevicePath \"\"" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.955788 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" event={"ID":"61f49bc4-6aee-43e9-8fc4-8380546e9da4","Type":"ContainerDied","Data":"d69eefcb120fb7459d11370ccd9d60c77733f749a7a56ee3a8314a98afd5201c"} Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.955832 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d69eefcb120fb7459d11370ccd9d60c77733f749a7a56ee3a8314a98afd5201c" Feb 16 11:22:07 crc kubenswrapper[4797]: I0216 11:22:07.955904 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.753880 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk"] Feb 16 11:22:10 crc kubenswrapper[4797]: E0216 11:22:10.754465 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerName="extract" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.754482 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerName="extract" Feb 16 11:22:10 crc kubenswrapper[4797]: E0216 11:22:10.754501 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerName="pull" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.754510 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerName="pull" Feb 16 11:22:10 crc kubenswrapper[4797]: E0216 11:22:10.754537 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerName="util" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.754544 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerName="util" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.754698 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="61f49bc4-6aee-43e9-8fc4-8380546e9da4" containerName="extract" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.755251 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.758409 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-sxcb4" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.778512 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk"] Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.849794 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bj8g\" (UniqueName: \"kubernetes.io/projected/01351148-9ca7-4227-a44d-144584794e6f-kube-api-access-4bj8g\") pod \"openstack-operator-controller-init-55dffc8d68-qpjsk\" (UID: \"01351148-9ca7-4227-a44d-144584794e6f\") " pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.951010 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bj8g\" (UniqueName: \"kubernetes.io/projected/01351148-9ca7-4227-a44d-144584794e6f-kube-api-access-4bj8g\") pod \"openstack-operator-controller-init-55dffc8d68-qpjsk\" (UID: \"01351148-9ca7-4227-a44d-144584794e6f\") " pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" Feb 16 11:22:10 crc kubenswrapper[4797]: I0216 11:22:10.971705 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bj8g\" (UniqueName: \"kubernetes.io/projected/01351148-9ca7-4227-a44d-144584794e6f-kube-api-access-4bj8g\") pod \"openstack-operator-controller-init-55dffc8d68-qpjsk\" (UID: \"01351148-9ca7-4227-a44d-144584794e6f\") " pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" Feb 16 11:22:11 crc kubenswrapper[4797]: I0216 11:22:11.081193 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" Feb 16 11:22:11 crc kubenswrapper[4797]: I0216 11:22:11.527390 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk"] Feb 16 11:22:11 crc kubenswrapper[4797]: W0216 11:22:11.530504 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01351148_9ca7_4227_a44d_144584794e6f.slice/crio-5444348ebd3247130be61949f9ff64ff56e6e2663ddd4b072529572d214acef2 WatchSource:0}: Error finding container 5444348ebd3247130be61949f9ff64ff56e6e2663ddd4b072529572d214acef2: Status 404 returned error can't find the container with id 5444348ebd3247130be61949f9ff64ff56e6e2663ddd4b072529572d214acef2 Feb 16 11:22:12 crc kubenswrapper[4797]: I0216 11:22:12.000759 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" event={"ID":"01351148-9ca7-4227-a44d-144584794e6f","Type":"ContainerStarted","Data":"5444348ebd3247130be61949f9ff64ff56e6e2663ddd4b072529572d214acef2"} Feb 16 11:22:17 crc kubenswrapper[4797]: I0216 11:22:17.031102 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" event={"ID":"01351148-9ca7-4227-a44d-144584794e6f","Type":"ContainerStarted","Data":"bf885a02654ec123706c59cea26aaeccf29dd4da94c7ae9188f08c63926e532e"} Feb 16 11:22:17 crc kubenswrapper[4797]: I0216 11:22:17.031509 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" Feb 16 11:22:21 crc kubenswrapper[4797]: I0216 11:22:21.084333 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" Feb 16 11:22:21 crc kubenswrapper[4797]: I0216 11:22:21.115370 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-55dffc8d68-qpjsk" podStartSLOduration=6.033750687 podStartE2EDuration="11.115345141s" podCreationTimestamp="2026-02-16 11:22:10 +0000 UTC" firstStartedPulling="2026-02-16 11:22:11.532832721 +0000 UTC m=+926.253017701" lastFinishedPulling="2026-02-16 11:22:16.614427175 +0000 UTC m=+931.334612155" observedRunningTime="2026-02-16 11:22:17.061072985 +0000 UTC m=+931.781257965" watchObservedRunningTime="2026-02-16 11:22:21.115345141 +0000 UTC m=+935.835530151" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.678539 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.679931 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.682270 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-clncr" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.692707 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.693818 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.696807 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-fz2qr" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.703121 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.703161 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.713762 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.714791 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.716700 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rr4fv" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.718370 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.723837 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.729889 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tddsr"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.730907 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.735516 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4pw7j" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.746598 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.764612 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tddsr"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.827787 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.828094 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l64vh\" (UniqueName: \"kubernetes.io/projected/3439dee8-2272-41cc-8f20-1011e12202e8-kube-api-access-l64vh\") pod \"cinder-operator-controller-manager-5d946d989d-dn2rf\" (UID: \"3439dee8-2272-41cc-8f20-1011e12202e8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.828137 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmg98\" (UniqueName: \"kubernetes.io/projected/ff2de9ed-5f7c-4cf3-80f0-f0b12901438f-kube-api-access-jmg98\") pod \"barbican-operator-controller-manager-868647ff47-5f2m4\" (UID: \"ff2de9ed-5f7c-4cf3-80f0-f0b12901438f\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.828184 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqs4\" (UniqueName: \"kubernetes.io/projected/be2f5af9-52ca-4678-80c6-ad099ddbf8ff-kube-api-access-5cqs4\") pod \"designate-operator-controller-manager-6d8bf5c495-c2tk9\" (UID: \"be2f5af9-52ca-4678-80c6-ad099ddbf8ff\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.828303 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klcjq\" (UniqueName: \"kubernetes.io/projected/4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9-kube-api-access-klcjq\") pod \"glance-operator-controller-manager-77987464f4-tddsr\" (UID: \"4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.828696 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.836369 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-lbdcs" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.864282 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.873187 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.874061 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.876902 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-jb64g" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.899635 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.900848 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.901815 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.907066 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.907230 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-sq7kt" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.907313 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.908180 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.912870 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nlfqs" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.913826 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.915175 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.922153 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.922492 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-kkqx6" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.929280 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqs4\" (UniqueName: \"kubernetes.io/projected/be2f5af9-52ca-4678-80c6-ad099ddbf8ff-kube-api-access-5cqs4\") pod \"designate-operator-controller-manager-6d8bf5c495-c2tk9\" (UID: \"be2f5af9-52ca-4678-80c6-ad099ddbf8ff\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.929354 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5cf\" (UniqueName: \"kubernetes.io/projected/f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f-kube-api-access-zd5cf\") pod \"horizon-operator-controller-manager-5b9b8895d5-llbkc\" (UID: \"f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.929386 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klcjq\" (UniqueName: \"kubernetes.io/projected/4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9-kube-api-access-klcjq\") pod \"glance-operator-controller-manager-77987464f4-tddsr\" (UID: \"4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.929423 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l64vh\" (UniqueName: \"kubernetes.io/projected/3439dee8-2272-41cc-8f20-1011e12202e8-kube-api-access-l64vh\") pod \"cinder-operator-controller-manager-5d946d989d-dn2rf\" (UID: \"3439dee8-2272-41cc-8f20-1011e12202e8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.929441 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmg98\" (UniqueName: \"kubernetes.io/projected/ff2de9ed-5f7c-4cf3-80f0-f0b12901438f-kube-api-access-jmg98\") pod \"barbican-operator-controller-manager-868647ff47-5f2m4\" (UID: \"ff2de9ed-5f7c-4cf3-80f0-f0b12901438f\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.929844 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.939278 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.947443 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.948216 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.959961 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh"] Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.960756 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.961023 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-z8nsk" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.966507 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-2tl42" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.977197 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqs4\" (UniqueName: \"kubernetes.io/projected/be2f5af9-52ca-4678-80c6-ad099ddbf8ff-kube-api-access-5cqs4\") pod \"designate-operator-controller-manager-6d8bf5c495-c2tk9\" (UID: \"be2f5af9-52ca-4678-80c6-ad099ddbf8ff\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.978812 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmg98\" (UniqueName: \"kubernetes.io/projected/ff2de9ed-5f7c-4cf3-80f0-f0b12901438f-kube-api-access-jmg98\") pod \"barbican-operator-controller-manager-868647ff47-5f2m4\" (UID: \"ff2de9ed-5f7c-4cf3-80f0-f0b12901438f\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.987830 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klcjq\" (UniqueName: \"kubernetes.io/projected/4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9-kube-api-access-klcjq\") pod \"glance-operator-controller-manager-77987464f4-tddsr\" (UID: \"4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" Feb 16 11:22:41 crc kubenswrapper[4797]: I0216 11:22:41.989249 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l64vh\" (UniqueName: \"kubernetes.io/projected/3439dee8-2272-41cc-8f20-1011e12202e8-kube-api-access-l64vh\") pod \"cinder-operator-controller-manager-5d946d989d-dn2rf\" (UID: \"3439dee8-2272-41cc-8f20-1011e12202e8\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.000933 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.009972 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.016639 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.025955 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.026784 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.027230 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.028736 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-bqgdp" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030382 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q269b\" (UniqueName: \"kubernetes.io/projected/e6757076-86f7-48aa-87b3-27d275221210-kube-api-access-q269b\") pod \"heat-operator-controller-manager-69f49c598c-gp7jv\" (UID: \"e6757076-86f7-48aa-87b3-27d275221210\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030416 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030444 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsbqr\" (UniqueName: \"kubernetes.io/projected/c5013d9b-4630-450f-80bf-312fbc3256ec-kube-api-access-tsbqr\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030463 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdmnr\" (UniqueName: \"kubernetes.io/projected/5ec1f813-5b71-4f97-919a-0414a1a7cb73-kube-api-access-cdmnr\") pod \"mariadb-operator-controller-manager-6994f66f48-ctjqh\" (UID: \"5ec1f813-5b71-4f97-919a-0414a1a7cb73\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030485 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb64c\" (UniqueName: \"kubernetes.io/projected/931bff49-5f65-49a0-8dab-c1b5858ec958-kube-api-access-zb64c\") pod \"keystone-operator-controller-manager-b4d948c87-c4rfb\" (UID: \"931bff49-5f65-49a0-8dab-c1b5858ec958\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030509 4797 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqf7s\" (UniqueName: \"kubernetes.io/projected/9e8f1871-1ed7-4ef9-8c88-901a64f13ccd-kube-api-access-kqf7s\") pod \"manila-operator-controller-manager-54f6768c69-jcjc8\" (UID: \"9e8f1871-1ed7-4ef9-8c88-901a64f13ccd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030570 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5cf\" (UniqueName: \"kubernetes.io/projected/f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f-kube-api-access-zd5cf\") pod \"horizon-operator-controller-manager-5b9b8895d5-llbkc\" (UID: \"f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.030601 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx7vf\" (UniqueName: \"kubernetes.io/projected/0c242ffd-e8a4-4f19-80e9-957c31876eb2-kube-api-access-wx7vf\") pod \"ironic-operator-controller-manager-554564d7fc-bmz7r\" (UID: \"0c242ffd-e8a4-4f19-80e9-957c31876eb2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.034031 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.056878 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.057639 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.062375 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.071966 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-jz86g" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.081743 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.082620 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.089211 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5cf\" (UniqueName: \"kubernetes.io/projected/f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f-kube-api-access-zd5cf\") pod \"horizon-operator-controller-manager-5b9b8895d5-llbkc\" (UID: \"f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.092732 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zcfrd" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.102032 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.120632 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137404 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69s6z\" (UniqueName: \"kubernetes.io/projected/669a405d-b513-461b-9d3d-fe7938e08dec-kube-api-access-69s6z\") pod \"octavia-operator-controller-manager-69f8888797-9tbc9\" (UID: \"669a405d-b513-461b-9d3d-fe7938e08dec\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137496 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd2rw\" (UniqueName: \"kubernetes.io/projected/0b0f4d9d-f30c-4981-87cf-1ea78972c784-kube-api-access-jd2rw\") pod \"neutron-operator-controller-manager-64ddbf8bb-92zcz\" (UID: \"0b0f4d9d-f30c-4981-87cf-1ea78972c784\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137563 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx7vf\" (UniqueName: \"kubernetes.io/projected/0c242ffd-e8a4-4f19-80e9-957c31876eb2-kube-api-access-wx7vf\") pod \"ironic-operator-controller-manager-554564d7fc-bmz7r\" (UID: \"0c242ffd-e8a4-4f19-80e9-957c31876eb2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137632 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x2ds\" (UniqueName: \"kubernetes.io/projected/60624e90-f529-495b-b523-fda5525b3404-kube-api-access-5x2ds\") pod \"nova-operator-controller-manager-567668f5cf-rcgvv\" (UID: \"60624e90-f529-495b-b523-fda5525b3404\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137664 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q269b\" (UniqueName: \"kubernetes.io/projected/e6757076-86f7-48aa-87b3-27d275221210-kube-api-access-q269b\") pod \"heat-operator-controller-manager-69f49c598c-gp7jv\" (UID: \"e6757076-86f7-48aa-87b3-27d275221210\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 
11:22:42.137696 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137725 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsbqr\" (UniqueName: \"kubernetes.io/projected/c5013d9b-4630-450f-80bf-312fbc3256ec-kube-api-access-tsbqr\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137753 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdmnr\" (UniqueName: \"kubernetes.io/projected/5ec1f813-5b71-4f97-919a-0414a1a7cb73-kube-api-access-cdmnr\") pod \"mariadb-operator-controller-manager-6994f66f48-ctjqh\" (UID: \"5ec1f813-5b71-4f97-919a-0414a1a7cb73\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137787 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb64c\" (UniqueName: \"kubernetes.io/projected/931bff49-5f65-49a0-8dab-c1b5858ec958-kube-api-access-zb64c\") pod \"keystone-operator-controller-manager-b4d948c87-c4rfb\" (UID: \"931bff49-5f65-49a0-8dab-c1b5858ec958\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.137847 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqf7s\" (UniqueName: \"kubernetes.io/projected/9e8f1871-1ed7-4ef9-8c88-901a64f13ccd-kube-api-access-kqf7s\") pod \"manila-operator-controller-manager-54f6768c69-jcjc8\" (UID: \"9e8f1871-1ed7-4ef9-8c88-901a64f13ccd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.138817 4797 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.138872 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert podName:c5013d9b-4630-450f-80bf-312fbc3256ec nodeName:}" failed. No retries permitted until 2026-02-16 11:22:42.638852576 +0000 UTC m=+957.359037556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert") pod "infra-operator-controller-manager-79d975b745-l6xg9" (UID: "c5013d9b-4630-450f-80bf-312fbc3256ec") : secret "infra-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.159600 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.160680 4797 util.go:30] "No sandbox for pod can be found. 
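The failed cert mount above is an ordering race rather than a hard error: infra-operator-controller-manager references secret infra-operator-webhook-server-cert before whatever creates it has run, so MountVolume.SetUp fails and nestedpendingoperations schedules a retry 500ms out. On repeated failures kubelet widens that delay exponentially; the 500ms initial value matches the log, while the factor and cap below are illustrative assumptions, not read from the log:

package main

import (
	"fmt"
	"time"
)

// Shape of a per-operation volume retry backoff as suggested by the
// "durationBeforeRetry 500ms" entry: an initial delay that doubles per
// consecutive failure up to a cap.
func backoff(initial, max time.Duration, failures int) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> retry in %v\n", n, backoff(500*time.Millisecond, 2*time.Minute, n))
	}
}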
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.180410 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx7vf\" (UniqueName: \"kubernetes.io/projected/0c242ffd-e8a4-4f19-80e9-957c31876eb2-kube-api-access-wx7vf\") pod \"ironic-operator-controller-manager-554564d7fc-bmz7r\" (UID: \"0c242ffd-e8a4-4f19-80e9-957c31876eb2\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.180412 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-77kc8" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.181627 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.191091 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.191912 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.199876 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsbqr\" (UniqueName: \"kubernetes.io/projected/c5013d9b-4630-450f-80bf-312fbc3256ec-kube-api-access-tsbqr\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.200297 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q269b\" (UniqueName: \"kubernetes.io/projected/e6757076-86f7-48aa-87b3-27d275221210-kube-api-access-q269b\") pod \"heat-operator-controller-manager-69f49c598c-gp7jv\" (UID: \"e6757076-86f7-48aa-87b3-27d275221210\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.200696 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb64c\" (UniqueName: \"kubernetes.io/projected/931bff49-5f65-49a0-8dab-c1b5858ec958-kube-api-access-zb64c\") pod \"keystone-operator-controller-manager-b4d948c87-c4rfb\" (UID: \"931bff49-5f65-49a0-8dab-c1b5858ec958\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.201854 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.202617 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.206128 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqf7s\" (UniqueName: \"kubernetes.io/projected/9e8f1871-1ed7-4ef9-8c88-901a64f13ccd-kube-api-access-kqf7s\") pod \"manila-operator-controller-manager-54f6768c69-jcjc8\" (UID: \"9e8f1871-1ed7-4ef9-8c88-901a64f13ccd\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.206879 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-9l6n2" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.207467 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdmnr\" (UniqueName: \"kubernetes.io/projected/5ec1f813-5b71-4f97-919a-0414a1a7cb73-kube-api-access-cdmnr\") pod \"mariadb-operator-controller-manager-6994f66f48-ctjqh\" (UID: \"5ec1f813-5b71-4f97-919a-0414a1a7cb73\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.208350 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2ctb6" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.219904 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.242611 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.243307 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.243359 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69s6z\" (UniqueName: \"kubernetes.io/projected/669a405d-b513-461b-9d3d-fe7938e08dec-kube-api-access-69s6z\") pod \"octavia-operator-controller-manager-69f8888797-9tbc9\" (UID: \"669a405d-b513-461b-9d3d-fe7938e08dec\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.243386 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bs2r\" (UniqueName: \"kubernetes.io/projected/4521c529-8b50-4fd0-8696-b1207798e1f5-kube-api-access-6bs2r\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.243413 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krm4k\" (UniqueName: 
\"kubernetes.io/projected/7745cf21-caab-4866-99f1-f2d819e779d3-kube-api-access-krm4k\") pod \"ovn-operator-controller-manager-d44cf6b75-q54xh\" (UID: \"7745cf21-caab-4866-99f1-f2d819e779d3\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.243436 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99zv\" (UniqueName: \"kubernetes.io/projected/49b234d6-478d-44ec-9164-9482c3242ea2-kube-api-access-s99zv\") pod \"placement-operator-controller-manager-8497b45c89-czcgn\" (UID: \"49b234d6-478d-44ec-9164-9482c3242ea2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.243463 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd2rw\" (UniqueName: \"kubernetes.io/projected/0b0f4d9d-f30c-4981-87cf-1ea78972c784-kube-api-access-jd2rw\") pod \"neutron-operator-controller-manager-64ddbf8bb-92zcz\" (UID: \"0b0f4d9d-f30c-4981-87cf-1ea78972c784\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.243496 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x2ds\" (UniqueName: \"kubernetes.io/projected/60624e90-f529-495b-b523-fda5525b3404-kube-api-access-5x2ds\") pod \"nova-operator-controller-manager-567668f5cf-rcgvv\" (UID: \"60624e90-f529-495b-b523-fda5525b3404\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.250704 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.260746 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.266424 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69s6z\" (UniqueName: \"kubernetes.io/projected/669a405d-b513-461b-9d3d-fe7938e08dec-kube-api-access-69s6z\") pod \"octavia-operator-controller-manager-69f8888797-9tbc9\" (UID: \"669a405d-b513-461b-9d3d-fe7938e08dec\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.271778 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.272677 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5x2ds\" (UniqueName: \"kubernetes.io/projected/60624e90-f529-495b-b523-fda5525b3404-kube-api-access-5x2ds\") pod \"nova-operator-controller-manager-567668f5cf-rcgvv\" (UID: \"60624e90-f529-495b-b523-fda5525b3404\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.293671 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd2rw\" (UniqueName: \"kubernetes.io/projected/0b0f4d9d-f30c-4981-87cf-1ea78972c784-kube-api-access-jd2rw\") pod \"neutron-operator-controller-manager-64ddbf8bb-92zcz\" (UID: \"0b0f4d9d-f30c-4981-87cf-1ea78972c784\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.295915 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.305931 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.348492 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.348907 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bs2r\" (UniqueName: \"kubernetes.io/projected/4521c529-8b50-4fd0-8696-b1207798e1f5-kube-api-access-6bs2r\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.348994 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krm4k\" (UniqueName: \"kubernetes.io/projected/7745cf21-caab-4866-99f1-f2d819e779d3-kube-api-access-krm4k\") pod \"ovn-operator-controller-manager-d44cf6b75-q54xh\" (UID: \"7745cf21-caab-4866-99f1-f2d819e779d3\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.349068 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99zv\" (UniqueName: \"kubernetes.io/projected/49b234d6-478d-44ec-9164-9482c3242ea2-kube-api-access-s99zv\") pod \"placement-operator-controller-manager-8497b45c89-czcgn\" (UID: \"49b234d6-478d-44ec-9164-9482c3242ea2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.349559 4797 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 
11:22:42.349624 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert podName:4521c529-8b50-4fd0-8696-b1207798e1f5 nodeName:}" failed. No retries permitted until 2026-02-16 11:22:42.849608464 +0000 UTC m=+957.569793444 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" (UID: "4521c529-8b50-4fd0-8696-b1207798e1f5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.366649 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.380756 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krm4k\" (UniqueName: \"kubernetes.io/projected/7745cf21-caab-4866-99f1-f2d819e779d3-kube-api-access-krm4k\") pod \"ovn-operator-controller-manager-d44cf6b75-q54xh\" (UID: \"7745cf21-caab-4866-99f1-f2d819e779d3\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.384144 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.387264 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99zv\" (UniqueName: \"kubernetes.io/projected/49b234d6-478d-44ec-9164-9482c3242ea2-kube-api-access-s99zv\") pod \"placement-operator-controller-manager-8497b45c89-czcgn\" (UID: \"49b234d6-478d-44ec-9164-9482c3242ea2\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.395409 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-gclqk" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.397264 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.404834 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bs2r\" (UniqueName: \"kubernetes.io/projected/4521c529-8b50-4fd0-8696-b1207798e1f5-kube-api-access-6bs2r\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.420920 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.426223 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.430374 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.431315 4797 util.go:30] "No sandbox for pod can be found. 
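The same ordering race hits openstack-baremetal-operator-webhook-server-cert above, and the numbers line up: the "No retries permitted until" wall-clock time is the failure time plus exactly the 500ms durationBeforeRetry (likewise in the m=+ offsets, 957.569793444 - 0.500 = 957.069793444 at the moment of failure). Verified from the timestamp printed in the entry:

package main

import (
	"fmt"
	"time"
)

// Confirm that "No retries permitted until" equals failure time plus
// durationBeforeRetry, using the wall-clock timestamp printed in the
// nestedpendingoperations entry above.
func main() {
	retryAt, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2026-02-16 11:22:42.849608464 +0000 UTC")
	if err != nil {
		panic(err)
	}
	failedAt := retryAt.Add(-500 * time.Millisecond)
	fmt.Println("operation failed at:", failedAt) // 2026-02-16 11:22:42.349608464 +0000 UTC
}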
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.432877 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-828lk" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.452744 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nxnz\" (UniqueName: \"kubernetes.io/projected/ae9b635f-ae0c-4d62-9860-a9817b6d668e-kube-api-access-6nxnz\") pod \"swift-operator-controller-manager-68f46476f-pvbnl\" (UID: \"ae9b635f-ae0c-4d62-9860-a9817b6d668e\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.461633 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.475971 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-rk7qz"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.477931 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.488828 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-bpz9k" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.489831 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.489842 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-rk7qz"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.494302 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.520937 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.521845 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.525807 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-b6q7h" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.540528 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.554494 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.555892 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nxnz\" (UniqueName: \"kubernetes.io/projected/ae9b635f-ae0c-4d62-9860-a9817b6d668e-kube-api-access-6nxnz\") pod \"swift-operator-controller-manager-68f46476f-pvbnl\" (UID: \"ae9b635f-ae0c-4d62-9860-a9817b6d668e\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.555986 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzh5s\" (UniqueName: \"kubernetes.io/projected/c05e0068-d50b-459b-86ab-b076230093b6-kube-api-access-zzh5s\") pod \"test-operator-controller-manager-7866795846-rk7qz\" (UID: \"c05e0068-d50b-459b-86ab-b076230093b6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.556040 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7dp\" (UniqueName: \"kubernetes.io/projected/e88a6189-ad47-438a-baab-3dcc5d781126-kube-api-access-wq7dp\") pod \"telemetry-operator-controller-manager-64b85768bb-4k7fc\" (UID: \"e88a6189-ad47-438a-baab-3dcc5d781126\") " pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.571297 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nxnz\" (UniqueName: \"kubernetes.io/projected/ae9b635f-ae0c-4d62-9860-a9817b6d668e-kube-api-access-6nxnz\") pod \"swift-operator-controller-manager-68f46476f-pvbnl\" (UID: \"ae9b635f-ae0c-4d62-9860-a9817b6d668e\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.586219 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.592788 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.595807 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.597054 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.600185 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.600202 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4lfhs" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.600311 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.607330 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.613879 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.635350 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.636187 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.639910 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xgrp5" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.642274 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.662354 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.662438 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.662476 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.662534 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzh5s\" (UniqueName: \"kubernetes.io/projected/c05e0068-d50b-459b-86ab-b076230093b6-kube-api-access-zzh5s\") pod 
\"test-operator-controller-manager-7866795846-rk7qz\" (UID: \"c05e0068-d50b-459b-86ab-b076230093b6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.662562 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9lj\" (UniqueName: \"kubernetes.io/projected/ca037f0f-8b30-4f77-b039-b4d92368af5a-kube-api-access-2b9lj\") pod \"watcher-operator-controller-manager-5db88f68c-vxl6t\" (UID: \"ca037f0f-8b30-4f77-b039-b4d92368af5a\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.662628 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj942\" (UniqueName: \"kubernetes.io/projected/40a83645-f1ce-4393-90d1-7fb9d2144bfa-kube-api-access-xj942\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.662705 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7dp\" (UniqueName: \"kubernetes.io/projected/e88a6189-ad47-438a-baab-3dcc5d781126-kube-api-access-wq7dp\") pod \"telemetry-operator-controller-manager-64b85768bb-4k7fc\" (UID: \"e88a6189-ad47-438a-baab-3dcc5d781126\") " pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.663481 4797 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.663605 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert podName:c5013d9b-4630-450f-80bf-312fbc3256ec nodeName:}" failed. No retries permitted until 2026-02-16 11:22:43.663556811 +0000 UTC m=+958.383741791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert") pod "infra-operator-controller-manager-79d975b745-l6xg9" (UID: "c5013d9b-4630-450f-80bf-312fbc3256ec") : secret "infra-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.688037 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7dp\" (UniqueName: \"kubernetes.io/projected/e88a6189-ad47-438a-baab-3dcc5d781126-kube-api-access-wq7dp\") pod \"telemetry-operator-controller-manager-64b85768bb-4k7fc\" (UID: \"e88a6189-ad47-438a-baab-3dcc5d781126\") " pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.691659 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzh5s\" (UniqueName: \"kubernetes.io/projected/c05e0068-d50b-459b-86ab-b076230093b6-kube-api-access-zzh5s\") pod \"test-operator-controller-manager-7866795846-rk7qz\" (UID: \"c05e0068-d50b-459b-86ab-b076230093b6\") " pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.720348 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.730948 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.746634 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.763505 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b9lj\" (UniqueName: \"kubernetes.io/projected/ca037f0f-8b30-4f77-b039-b4d92368af5a-kube-api-access-2b9lj\") pod \"watcher-operator-controller-manager-5db88f68c-vxl6t\" (UID: \"ca037f0f-8b30-4f77-b039-b4d92368af5a\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.763570 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj942\" (UniqueName: \"kubernetes.io/projected/40a83645-f1ce-4393-90d1-7fb9d2144bfa-kube-api-access-xj942\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.763651 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8266\" (UniqueName: \"kubernetes.io/projected/2322e8ef-0322-48a9-85fc-95345d68dea3-kube-api-access-v8266\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c9frn\" (UID: \"2322e8ef-0322-48a9-85fc-95345d68dea3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.763695 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.763732 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.763855 4797 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.763900 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:43.263884522 +0000 UTC m=+957.984069502 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "metrics-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.764378 4797 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.764403 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:43.264395376 +0000 UTC m=+957.984580356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.796898 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj942\" (UniqueName: \"kubernetes.io/projected/40a83645-f1ce-4393-90d1-7fb9d2144bfa-kube-api-access-xj942\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.799304 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b9lj\" (UniqueName: \"kubernetes.io/projected/ca037f0f-8b30-4f77-b039-b4d92368af5a-kube-api-access-2b9lj\") pod \"watcher-operator-controller-manager-5db88f68c-vxl6t\" (UID: \"ca037f0f-8b30-4f77-b039-b4d92368af5a\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.816232 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.867359 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.867516 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8266\" (UniqueName: \"kubernetes.io/projected/2322e8ef-0322-48a9-85fc-95345d68dea3-kube-api-access-v8266\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c9frn\" (UID: \"2322e8ef-0322-48a9-85fc-95345d68dea3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.868328 4797 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: E0216 11:22:42.868391 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert podName:4521c529-8b50-4fd0-8696-b1207798e1f5 nodeName:}" failed. No retries permitted until 2026-02-16 11:22:43.868373187 +0000 UTC m=+958.588558167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" (UID: "4521c529-8b50-4fd0-8696-b1207798e1f5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.886700 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8266\" (UniqueName: \"kubernetes.io/projected/2322e8ef-0322-48a9-85fc-95345d68dea3-kube-api-access-v8266\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c9frn\" (UID: \"2322e8ef-0322-48a9-85fc-95345d68dea3\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.927030 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9"] Feb 16 11:22:42 crc kubenswrapper[4797]: I0216 11:22:42.933268 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.144834 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.257475 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" event={"ID":"be2f5af9-52ca-4678-80c6-ad099ddbf8ff","Type":"ContainerStarted","Data":"84ddee21b62b70305ac58822bd5ffe1178bd5db8edde247585bc3277352ba309"} Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.258073 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" event={"ID":"3439dee8-2272-41cc-8f20-1011e12202e8","Type":"ContainerStarted","Data":"d767413300c177e6c6a9e0f354879b21dcad154fea24d6a97061cda1935e5091"} Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.278036 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.278106 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.278198 4797 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.278244 4797 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.278292 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:44.278272926 +0000 UTC m=+958.998457906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "metrics-server-cert" not found Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.278309 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:44.278302077 +0000 UTC m=+958.998487057 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "webhook-server-cert" not found Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.407310 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.415565 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.464727 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-tddsr"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.480890 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.507297 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.518792 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.528339 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.532902 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh"] Feb 16 11:22:43 crc kubenswrapper[4797]: W0216 11:22:43.533469 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6757076_86f7_48aa_87b3_27d275221210.slice/crio-beb7c0b6bcff55839917b8722b428960fc5400b02c524732f412356e9a2e6f00 WatchSource:0}: Error finding container beb7c0b6bcff55839917b8722b428960fc5400b02c524732f412356e9a2e6f00: Status 404 returned error can't find the container with id beb7c0b6bcff55839917b8722b428960fc5400b02c524732f412356e9a2e6f00 Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.537518 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.543021 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.632823 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.679093 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.684540 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9"] Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.698994 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-69s6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-69f8888797-9tbc9_openstack-operators(669a405d-b513-461b-9d3d-fe7938e08dec): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.700376 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" podUID="669a405d-b513-461b-9d3d-fe7938e08dec" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.701182 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s99zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-8497b45c89-czcgn_openstack-operators(49b234d6-478d-44ec-9164-9482c3242ea2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.703020 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" podUID="49b234d6-478d-44ec-9164-9482c3242ea2" Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.706315 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.706565 4797 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.706639 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert podName:c5013d9b-4630-450f-80bf-312fbc3256ec nodeName:}" failed. No retries permitted until 2026-02-16 11:22:45.706620848 +0000 UTC m=+960.426805838 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert") pod "infra-operator-controller-manager-79d975b745-l6xg9" (UID: "c5013d9b-4630-450f-80bf-312fbc3256ec") : secret "infra-operator-webhook-server-cert" not found Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.707558 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.720003 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t"] Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.720979 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krm4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-d44cf6b75-q54xh_openstack-operators(7745cf21-caab-4866-99f1-f2d819e779d3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.722202 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" podUID="7745cf21-caab-4866-99f1-f2d819e779d3" Feb 16 
11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.753274 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn"] Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.759404 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl"] Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.784954 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2b9lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-vxl6t_openstack-operators(ca037f0f-8b30-4f77-b039-b4d92368af5a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.785064 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nxnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68f46476f-pvbnl_openstack-operators(ae9b635f-ae0c-4d62-9860-a9817b6d668e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.786632 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8266,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-c9frn_openstack-operators(2322e8ef-0322-48a9-85fc-95345d68dea3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.786698 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" podUID="ae9b635f-ae0c-4d62-9860-a9817b6d668e" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.786763 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" podUID="ca037f0f-8b30-4f77-b039-b4d92368af5a" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.791638 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" podUID="2322e8ef-0322-48a9-85fc-95345d68dea3" Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.820916 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-rk7qz"] Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.826532 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zzh5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7866795846-rk7qz_openstack-operators(c05e0068-d50b-459b-86ab-b076230093b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.827734 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" podUID="c05e0068-d50b-459b-86ab-b076230093b6" Feb 16 11:22:43 crc kubenswrapper[4797]: I0216 11:22:43.909687 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.909883 4797 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 11:22:43 crc kubenswrapper[4797]: E0216 11:22:43.909961 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert podName:4521c529-8b50-4fd0-8696-b1207798e1f5 nodeName:}" failed. No retries permitted until 2026-02-16 11:22:45.909942233 +0000 UTC m=+960.630127213 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" (UID: "4521c529-8b50-4fd0-8696-b1207798e1f5") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.269000 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" event={"ID":"ae9b635f-ae0c-4d62-9860-a9817b6d668e","Type":"ContainerStarted","Data":"56ca0938bb5e7b0d8ebd76f10acad11ea20162566eb12d3bed462e23d2a9983f"} Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.270604 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" podUID="ae9b635f-ae0c-4d62-9860-a9817b6d668e" Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.271845 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" event={"ID":"9e8f1871-1ed7-4ef9-8c88-901a64f13ccd","Type":"ContainerStarted","Data":"3af652c785fce760b409eb7ab2dfae15ae18c4e20924728060313c3160c18736"} Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.276399 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" event={"ID":"49b234d6-478d-44ec-9164-9482c3242ea2","Type":"ContainerStarted","Data":"25e001b1d0b447967ac3557446e16292d30e6ad6b77e6ee6912ccae22114ed1a"} Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.280743 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" podUID="49b234d6-478d-44ec-9164-9482c3242ea2" Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.284422 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" event={"ID":"f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f","Type":"ContainerStarted","Data":"3e9d0c91b7114265ae9ced71d858591a74482565636130d78c7ce7bc2f6259ed"} Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.288104 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" event={"ID":"669a405d-b513-461b-9d3d-fe7938e08dec","Type":"ContainerStarted","Data":"b137f9a48763e83367258f4c09e64c380a5017bb3f3dfcd5819e56845c0ec18e"} Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.290143 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" event={"ID":"e6757076-86f7-48aa-87b3-27d275221210","Type":"ContainerStarted","Data":"beb7c0b6bcff55839917b8722b428960fc5400b02c524732f412356e9a2e6f00"} Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.290250 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" podUID="669a405d-b513-461b-9d3d-fe7938e08dec"
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.291316 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" event={"ID":"0b0f4d9d-f30c-4981-87cf-1ea78972c784","Type":"ContainerStarted","Data":"9869ce30eba1fa192bf06a60bffef81d4e40414305495c5e12e1d2a9aa37954f"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.293851 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" event={"ID":"931bff49-5f65-49a0-8dab-c1b5858ec958","Type":"ContainerStarted","Data":"4216e89cb4e8fe65b8ac229c678719b181acbcb072ba1cd32c2fd15392333d71"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.296689 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" event={"ID":"2322e8ef-0322-48a9-85fc-95345d68dea3","Type":"ContainerStarted","Data":"f600a8bfc3928132461c50ef03ce06ebe3960449bcc7f7ccb85f05eaf26783ad"}
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.298559 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" podUID="2322e8ef-0322-48a9-85fc-95345d68dea3"
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.300174 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" event={"ID":"ca037f0f-8b30-4f77-b039-b4d92368af5a","Type":"ContainerStarted","Data":"86e71da5d4948b79bd3260380dce3a097950c46699109b2e56902fcb8bf413f4"}
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.301895 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" podUID="ca037f0f-8b30-4f77-b039-b4d92368af5a"
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.303106 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" event={"ID":"e88a6189-ad47-438a-baab-3dcc5d781126","Type":"ContainerStarted","Data":"f965918b12e8285ad67b907e4d6a7be32ebcaf9799e3653962738742caea9e14"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.305622 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" event={"ID":"7745cf21-caab-4866-99f1-f2d819e779d3","Type":"ContainerStarted","Data":"b938f265b451956123b09faf985855c92af98171c24996e50c3b8ca703e1cc4f"}
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.311367 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" podUID="7745cf21-caab-4866-99f1-f2d819e779d3"
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.316066 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" event={"ID":"c05e0068-d50b-459b-86ab-b076230093b6","Type":"ContainerStarted","Data":"c91f779f8030e89689464f022ec6b10f23a5f5169fef7b5e10b78f52472cbe07"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.316621 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.316700 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.316926 4797 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.316980 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:46.316963864 +0000 UTC m=+961.037148844 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "metrics-server-cert" not found
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.316982 4797 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.317185 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:46.317163109 +0000 UTC m=+961.037348159 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "webhook-server-cert" not found
Feb 16 11:22:44 crc kubenswrapper[4797]: E0216 11:22:44.319077 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" podUID="c05e0068-d50b-459b-86ab-b076230093b6"
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.319391 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" event={"ID":"4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9","Type":"ContainerStarted","Data":"def367ac16667f48ed5277661808649b337f3cec01ed101c37018355838eb45c"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.320276 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" event={"ID":"5ec1f813-5b71-4f97-919a-0414a1a7cb73","Type":"ContainerStarted","Data":"441de87a27a389a1d7d8e17150b8129e8a0be1cd2fe2b9a829296b8809acb905"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.325649 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" event={"ID":"60624e90-f529-495b-b523-fda5525b3404","Type":"ContainerStarted","Data":"29f732f59b18cca1f3145aa65db67e5f30b9899a3228dce6d2824cc65f312d5f"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.333427 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" event={"ID":"0c242ffd-e8a4-4f19-80e9-957c31876eb2","Type":"ContainerStarted","Data":"1ff23275693174a1789f1d5aa6da11159f66bd4238b7c80beeaff30c23d678fa"}
Feb 16 11:22:44 crc kubenswrapper[4797]: I0216 11:22:44.338560 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" event={"ID":"ff2de9ed-5f7c-4cf3-80f0-f0b12901438f","Type":"ContainerStarted","Data":"8f2b89082b08684bd8f309337df9fcf3c8663a64225545614ea22b3d18b6ab24"}
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.358073 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" podUID="ae9b635f-ae0c-4d62-9860-a9817b6d668e"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.367112 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd\\\"\"" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" podUID="49b234d6-478d-44ec-9164-9482c3242ea2"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.367191 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" podUID="2322e8ef-0322-48a9-85fc-95345d68dea3"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.367373 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" podUID="7745cf21-caab-4866-99f1-f2d819e779d3"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.367665 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" podUID="669a405d-b513-461b-9d3d-fe7938e08dec"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.369104 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6\\\"\"" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" podUID="c05e0068-d50b-459b-86ab-b076230093b6"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.374130 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" podUID="ca037f0f-8b30-4f77-b039-b4d92368af5a"
Feb 16 11:22:45 crc kubenswrapper[4797]: I0216 11:22:45.741964 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.742989 4797 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.743190 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert podName:c5013d9b-4630-450f-80bf-312fbc3256ec nodeName:}" failed. No retries permitted until 2026-02-16 11:22:49.743063769 +0000 UTC m=+964.463248789 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert") pod "infra-operator-controller-manager-79d975b745-l6xg9" (UID: "c5013d9b-4630-450f-80bf-312fbc3256ec") : secret "infra-operator-webhook-server-cert" not found
Feb 16 11:22:45 crc kubenswrapper[4797]: I0216 11:22:45.944660 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.944790 4797 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 11:22:45 crc kubenswrapper[4797]: E0216 11:22:45.944841 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert podName:4521c529-8b50-4fd0-8696-b1207798e1f5 nodeName:}" failed. No retries permitted until 2026-02-16 11:22:49.944826522 +0000 UTC m=+964.665011502 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" (UID: "4521c529-8b50-4fd0-8696-b1207798e1f5") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 11:22:46 crc kubenswrapper[4797]: I0216 11:22:46.350869 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:46 crc kubenswrapper[4797]: I0216 11:22:46.351223 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:46 crc kubenswrapper[4797]: E0216 11:22:46.351077 4797 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 11:22:46 crc kubenswrapper[4797]: E0216 11:22:46.351455 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:50.351437231 +0000 UTC m=+965.071622211 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "webhook-server-cert" not found
Feb 16 11:22:46 crc kubenswrapper[4797]: E0216 11:22:46.351395 4797 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 11:22:46 crc kubenswrapper[4797]: E0216 11:22:46.351713 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:50.351635297 +0000 UTC m=+965.071820287 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "metrics-server-cert" not found
Feb 16 11:22:49 crc kubenswrapper[4797]: I0216 11:22:49.804927 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:22:49 crc kubenswrapper[4797]: E0216 11:22:49.805142 4797 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 11:22:49 crc kubenswrapper[4797]: E0216 11:22:49.805331 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert podName:c5013d9b-4630-450f-80bf-312fbc3256ec nodeName:}" failed. No retries permitted until 2026-02-16 11:22:57.805315113 +0000 UTC m=+972.525500093 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert") pod "infra-operator-controller-manager-79d975b745-l6xg9" (UID: "c5013d9b-4630-450f-80bf-312fbc3256ec") : secret "infra-operator-webhook-server-cert" not found
Feb 16 11:22:50 crc kubenswrapper[4797]: I0216 11:22:50.007737 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"
Feb 16 11:22:50 crc kubenswrapper[4797]: E0216 11:22:50.007903 4797 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 11:22:50 crc kubenswrapper[4797]: E0216 11:22:50.007953 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert podName:4521c529-8b50-4fd0-8696-b1207798e1f5 nodeName:}" failed. No retries permitted until 2026-02-16 11:22:58.00793576 +0000 UTC m=+972.728120740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" (UID: "4521c529-8b50-4fd0-8696-b1207798e1f5") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 16 11:22:50 crc kubenswrapper[4797]: I0216 11:22:50.412688 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:50 crc kubenswrapper[4797]: I0216 11:22:50.413035 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:50 crc kubenswrapper[4797]: E0216 11:22:50.412961 4797 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 16 11:22:50 crc kubenswrapper[4797]: E0216 11:22:50.413233 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:58.413219083 +0000 UTC m=+973.133404063 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "webhook-server-cert" not found
Feb 16 11:22:50 crc kubenswrapper[4797]: E0216 11:22:50.413185 4797 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 16 11:22:50 crc kubenswrapper[4797]: E0216 11:22:50.413767 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs podName:40a83645-f1ce-4393-90d1-7fb9d2144bfa nodeName:}" failed. No retries permitted until 2026-02-16 11:22:58.413759837 +0000 UTC m=+973.133944807 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs") pod "openstack-operator-controller-manager-6b65fbbb9f-7rjfc" (UID: "40a83645-f1ce-4393-90d1-7fb9d2144bfa") : secret "metrics-server-cert" not found
Feb 16 11:22:57 crc kubenswrapper[4797]: I0216 11:22:57.833488 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:22:57 crc kubenswrapper[4797]: E0216 11:22:57.833900 4797 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Feb 16 11:22:57 crc kubenswrapper[4797]: E0216 11:22:57.834248 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert podName:c5013d9b-4630-450f-80bf-312fbc3256ec nodeName:}" failed. No retries permitted until 2026-02-16 11:23:13.834217116 +0000 UTC m=+988.554402126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert") pod "infra-operator-controller-manager-79d975b745-l6xg9" (UID: "c5013d9b-4630-450f-80bf-312fbc3256ec") : secret "infra-operator-webhook-server-cert" not found
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.037510 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.043308 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4521c529-8b50-4fd0-8696-b1207798e1f5-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg\" (UID: \"4521c529-8b50-4fd0-8696-b1207798e1f5\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.304514 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2ctb6"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.313240 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.443409 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.444313 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.446383 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-webhook-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.450626 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/40a83645-f1ce-4393-90d1-7fb9d2144bfa-metrics-certs\") pod \"openstack-operator-controller-manager-6b65fbbb9f-7rjfc\" (UID: \"40a83645-f1ce-4393-90d1-7fb9d2144bfa\") " pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.735061 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4lfhs"
Feb 16 11:22:58 crc kubenswrapper[4797]: I0216 11:22:58.743806 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:23:01 crc kubenswrapper[4797]: E0216 11:23:01.279953 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1"
Feb 16 11:23:01 crc kubenswrapper[4797]: E0216 11:23:01.280327 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zb64c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-c4rfb_openstack-operators(931bff49-5f65-49a0-8dab-c1b5858ec958): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 16 11:23:01 crc kubenswrapper[4797]: E0216 11:23:01.282385 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" podUID="931bff49-5f65-49a0-8dab-c1b5858ec958"
Feb 16 11:23:01 crc kubenswrapper[4797]: E0216 11:23:01.527104 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" podUID="931bff49-5f65-49a0-8dab-c1b5858ec958"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.210448 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"]
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.242650 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"]
Feb 16 11:23:02 crc kubenswrapper[4797]: W0216 11:23:02.377775 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4521c529_8b50_4fd0_8696_b1207798e1f5.slice/crio-f781c39667ad34d17e6193fe9f23f0aeff9cc333e58e8c62c323209c1f256653 WatchSource:0}: Error finding container f781c39667ad34d17e6193fe9f23f0aeff9cc333e58e8c62c323209c1f256653: Status 404 returned error can't find the container with id f781c39667ad34d17e6193fe9f23f0aeff9cc333e58e8c62c323209c1f256653
Feb 16 11:23:02 crc kubenswrapper[4797]: W0216 11:23:02.379315 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40a83645_f1ce_4393_90d1_7fb9d2144bfa.slice/crio-eb595ba6ab3c0111cf1e7e4044faacdc04ac4c6c53c2e2ea9d53db63c644311e WatchSource:0}: Error finding container eb595ba6ab3c0111cf1e7e4044faacdc04ac4c6c53c2e2ea9d53db63c644311e: Status 404 returned error can't find the container with id eb595ba6ab3c0111cf1e7e4044faacdc04ac4c6c53c2e2ea9d53db63c644311e
Feb 16 11:23:02 crc kubenswrapper[4797]: E0216 11:23:02.453717 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.227:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902"
Feb 16 11:23:02 crc kubenswrapper[4797]: E0216 11:23:02.453775 4797 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.227:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902"
Feb 16 11:23:02 crc kubenswrapper[4797]: E0216 11:23:02.454196 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.227:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wq7dp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-64b85768bb-4k7fc_openstack-operators(e88a6189-ad47-438a-baab-3dcc5d781126): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 16 11:23:02 crc kubenswrapper[4797]: E0216 11:23:02.455468 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" podUID="e88a6189-ad47-438a-baab-3dcc5d781126"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.539633 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" event={"ID":"be2f5af9-52ca-4678-80c6-ad099ddbf8ff","Type":"ContainerStarted","Data":"c230cb08da9d7ceddd7f3b639da17cbb41062b099ab893f07746e113ddcf3eca"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.539779 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.541927 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" event={"ID":"f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f","Type":"ContainerStarted","Data":"94533a64ebf4900bb2cdbbfda9f00ca98eca8567e90e6cbb0800a8b4782bd31d"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.555145 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" event={"ID":"4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9","Type":"ContainerStarted","Data":"1aafe15c7f31b185290a0b2cd796b2c1e6d1e934c372f91bbe06f908b24b985b"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.563570 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" event={"ID":"0c242ffd-e8a4-4f19-80e9-957c31876eb2","Type":"ContainerStarted","Data":"94660fe5867c493dfde2cb4534def969e97177c131ed43668fc5f832d0d182ae"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.563843 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.566054 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" podStartSLOduration=2.909733026 podStartE2EDuration="21.566032844s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.046831016 +0000 UTC m=+957.767015996" lastFinishedPulling="2026-02-16 11:23:01.703130794 +0000 UTC m=+976.423315814" observedRunningTime="2026-02-16 11:23:02.563607779 +0000 UTC m=+977.283792759" watchObservedRunningTime="2026-02-16 11:23:02.566032844 +0000 UTC m=+977.286217824"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.570344 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" event={"ID":"9e8f1871-1ed7-4ef9-8c88-901a64f13ccd","Type":"ContainerStarted","Data":"80b2b5490459f5a6e5aee7cd750e5e50a32cfb7861889e7dbad7fa5de3bf05f9"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.574254 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" event={"ID":"e6757076-86f7-48aa-87b3-27d275221210","Type":"ContainerStarted","Data":"038a1e73e3ff4c92b961dc279881e5477b70131c844e93e034e993da35b38220"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.574888 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.579678 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" event={"ID":"3439dee8-2272-41cc-8f20-1011e12202e8","Type":"ContainerStarted","Data":"dd159575058945bf4ecd8cbcadc3e57c8cd8bb19cd79a1fc3f1d3873cebb8492"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.582363 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" event={"ID":"4521c529-8b50-4fd0-8696-b1207798e1f5","Type":"ContainerStarted","Data":"f781c39667ad34d17e6193fe9f23f0aeff9cc333e58e8c62c323209c1f256653"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.582553 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r" podStartSLOduration=3.400336753 podStartE2EDuration="21.582534725s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.521144368 +0000 UTC m=+958.241329348" lastFinishedPulling="2026-02-16 11:23:01.70334234 +0000 UTC m=+976.423527320" observedRunningTime="2026-02-16 11:23:02.579958784 +0000 UTC m=+977.300143764" watchObservedRunningTime="2026-02-16 11:23:02.582534725 +0000 UTC m=+977.302719705"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.593914 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" event={"ID":"60624e90-f529-495b-b523-fda5525b3404","Type":"ContainerStarted","Data":"0c246398e1736b77b58ce18c2068bfc5d1f195468dd4ec9c16f07af465be5257"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.595673 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" event={"ID":"40a83645-f1ce-4393-90d1-7fb9d2144bfa","Type":"ContainerStarted","Data":"eb595ba6ab3c0111cf1e7e4044faacdc04ac4c6c53c2e2ea9d53db63c644311e"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.600769 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" event={"ID":"ff2de9ed-5f7c-4cf3-80f0-f0b12901438f","Type":"ContainerStarted","Data":"344ab2a3a53a4f24015f14c7a51a344f907110e54dc5ca3c86c9e806e6eaff30"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.603924 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv" podStartSLOduration=3.4463605 podStartE2EDuration="21.603907139s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.537020961 +0000 UTC m=+958.257205941" lastFinishedPulling="2026-02-16 11:23:01.6945676 +0000 UTC m=+976.414752580" observedRunningTime="2026-02-16 11:23:02.59958658 +0000 UTC m=+977.319771560" watchObservedRunningTime="2026-02-16 11:23:02.603907139 +0000 UTC m=+977.324092109"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.606398 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" event={"ID":"5ec1f813-5b71-4f97-919a-0414a1a7cb73","Type":"ContainerStarted","Data":"111c5d151064a35e09759ce1f59f60924bfb9b125e1a12cd611cd2aa0915913d"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.607266 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.631157 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" event={"ID":"0b0f4d9d-f30c-4981-87cf-1ea78972c784","Type":"ContainerStarted","Data":"894a9a9b9dd587aa6a382a7fb047f38ad16f7cb231cafad23505cf318712832b"}
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.631497 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz"
Feb 16 11:23:02 crc kubenswrapper[4797]: E0216 11:23:02.632940 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.227:5001/openstack-k8s-operators/telemetry-operator:7c764327dd2ffab22c122e2f1706e47c6eeb2902\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" podUID="e88a6189-ad47-438a-baab-3dcc5d781126"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.633179 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh" podStartSLOduration=3.45188147 podStartE2EDuration="21.633157026s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.520676206 +0000 UTC m=+958.240861186" lastFinishedPulling="2026-02-16 11:23:01.701951722 +0000 UTC m=+976.422136742" observedRunningTime="2026-02-16 11:23:02.630562946 +0000 UTC m=+977.350747936" watchObservedRunningTime="2026-02-16 11:23:02.633157026 +0000 UTC m=+977.353342006"
Feb 16 11:23:02 crc kubenswrapper[4797]: I0216 11:23:02.716296 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz" podStartSLOduration=3.520706098 podStartE2EDuration="21.716282415s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.523434051 +0000 UTC m=+958.243619031" lastFinishedPulling="2026-02-16 11:23:01.719010368 +0000 UTC m=+976.439195348" observedRunningTime="2026-02-16 11:23:02.712483742 +0000 UTC m=+977.432668722" watchObservedRunningTime="2026-02-16 11:23:02.716282415 +0000 UTC m=+977.436467395"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.646104 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" event={"ID":"40a83645-f1ce-4393-90d1-7fb9d2144bfa","Type":"ContainerStarted","Data":"07de297a0a390b15d1ec18da763438af9ca5764d89aee2caa7b3f56117600b66"}
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.646434 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.646896 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.647277 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.648122 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.648246 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.649621 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.649666 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.704108 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv" podStartSLOduration=4.544319035 podStartE2EDuration="22.704073525s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.521791856 +0000 UTC m=+958.241976826" lastFinishedPulling="2026-02-16 11:23:01.681546335 +0000 UTC m=+976.401731316" observedRunningTime="2026-02-16 11:23:03.699232182 +0000 UTC m=+978.419417172" watchObservedRunningTime="2026-02-16 11:23:03.704073525 +0000 UTC m=+978.424258505"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.705148 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf" podStartSLOduration=4.056210586 podStartE2EDuration="22.705139613s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.031035585 +0000 UTC m=+957.751220565" lastFinishedPulling="2026-02-16 11:23:01.679964602 +0000 UTC m=+976.400149592" observedRunningTime="2026-02-16 11:23:03.668140404 +0000 UTC m=+978.388325384" watchObservedRunningTime="2026-02-16 11:23:03.705139613 +0000 UTC m=+978.425324593"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.761741 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc" podStartSLOduration=21.761715917 podStartE2EDuration="21.761715917s" podCreationTimestamp="2026-02-16 11:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:23:03.752942348 +0000 UTC m=+978.473127328" watchObservedRunningTime="2026-02-16 11:23:03.761715917 +0000 UTC m=+978.481900897"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.763721 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc" podStartSLOduration=4.592514372 podStartE2EDuration="22.763708362s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.508888635 +0000 UTC m=+958.229073615" lastFinishedPulling="2026-02-16 11:23:01.680082585 +0000 UTC m=+976.400267605" observedRunningTime="2026-02-16 11:23:03.728646775 +0000 UTC m=+978.448831775" watchObservedRunningTime="2026-02-16 11:23:03.763708362 +0000 UTC m=+978.483893352"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.779087 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr" podStartSLOduration=4.5604152970000005 podStartE2EDuration="22.779068601s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.474416266 +0000 UTC m=+958.194601246" lastFinishedPulling="2026-02-16 11:23:01.69306957 +0000 UTC m=+976.413254550" observedRunningTime="2026-02-16 11:23:03.777220891 +0000 UTC m=+978.497405881" watchObservedRunningTime="2026-02-16 11:23:03.779068601 +0000 UTC m=+978.499253581"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.818927 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4" podStartSLOduration=4.610595946 podStartE2EDuration="22.818862178s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.49368421 +0000 UTC m=+958.213869190" lastFinishedPulling="2026-02-16 11:23:01.701950442 +0000 UTC m=+976.422135422" observedRunningTime="2026-02-16 11:23:03.812433871 +0000 UTC m=+978.532618851" watchObservedRunningTime="2026-02-16 11:23:03.818862178 +0000 UTC m=+978.539047178"
Feb 16 11:23:03 crc kubenswrapper[4797]: I0216 11:23:03.839887 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8" podStartSLOduration=4.584257281 podStartE2EDuration="22.83986321s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.425740961 +0000 UTC m=+958.145925941" lastFinishedPulling="2026-02-16 11:23:01.68134685 +0000 UTC m=+976.401531870" observedRunningTime="2026-02-16 11:23:03.833467006 +0000 UTC m=+978.553652026" watchObservedRunningTime="2026-02-16 11:23:03.83986321 +0000 UTC m=+978.560048190"
Feb 16 11:23:08 crc kubenswrapper[4797]: I0216 11:23:08.750936 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6b65fbbb9f-7rjfc"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.703955 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.705420 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.741320 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" event={"ID":"ca037f0f-8b30-4f77-b039-b4d92368af5a","Type":"ContainerStarted","Data":"cd5f1b39f22ce2137b34da7f9f267869f95f6f4a72d3612fbcc924e19a636ae9"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.742647 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.744025 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" event={"ID":"49b234d6-478d-44ec-9164-9482c3242ea2","Type":"ContainerStarted","Data":"c4e377e01758ed2bdc9079f106765384ea65370deb595f40da555bfac621a77a"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.744675 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.746138 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" event={"ID":"4521c529-8b50-4fd0-8696-b1207798e1f5","Type":"ContainerStarted","Data":"f085e4c052316a629860a31dda5b7cefc5b54d7cb059536f53bd30e8bdae6e93"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.746734 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.748065 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" event={"ID":"7745cf21-caab-4866-99f1-f2d819e779d3","Type":"ContainerStarted","Data":"8f2a49486cf67acab0b9262d98649df351555d34c5b76e5177ed7ddc588f4b97"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.748636 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.750255 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" event={"ID":"669a405d-b513-461b-9d3d-fe7938e08dec","Type":"ContainerStarted","Data":"ae630256f7a1cbf4ee6e26e200a8575033f0a20d77fcef677aded49ae5d91471"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.750652 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.751811 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" event={"ID":"c05e0068-d50b-459b-86ab-b076230093b6","Type":"ContainerStarted","Data":"58857f424fc7deb9d0433730c5f0294c439df581f7fbdadd7ab6205c005f4cfa"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.752329 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.754007 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" event={"ID":"ae9b635f-ae0c-4d62-9860-a9817b6d668e","Type":"ContainerStarted","Data":"38ae44e5934d5b8f5674ce9e32f74af2cf7dab574c598ebff5af8e64bc14616d"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.754191 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.755326 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" event={"ID":"2322e8ef-0322-48a9-85fc-95345d68dea3","Type":"ContainerStarted","Data":"a9cc5030c6ca654bccf44941e5bb6bab9d6e65223a5e0e4966544b6b51c39124"}
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.768325 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t" podStartSLOduration=2.451054479 podStartE2EDuration="29.768303394s" podCreationTimestamp="2026-02-16 11:22:42 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.784830027 +0000 UTC m=+958.505015007" lastFinishedPulling="2026-02-16 11:23:11.102078942 +0000 UTC m=+985.822263922" observedRunningTime="2026-02-16 11:23:11.760631936 +0000 UTC m=+986.480816936" watchObservedRunningTime="2026-02-16 11:23:11.768303394 +0000 UTC m=+986.488488374"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.778447 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9" podStartSLOduration=3.472327285 podStartE2EDuration="30.778431871s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.698784024 +0000 UTC m=+958.418969004" lastFinishedPulling="2026-02-16 11:23:11.00488861 +0000 UTC m=+985.725073590" observedRunningTime="2026-02-16 11:23:11.774271787 +0000 UTC m=+986.494456777" watchObservedRunningTime="2026-02-16 11:23:11.778431871 +0000 UTC m=+986.498616851"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.793406 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c9frn" podStartSLOduration=2.475285281 podStartE2EDuration="29.793391739s" podCreationTimestamp="2026-02-16 11:22:42 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.786559124 +0000 UTC m=+958.506744104" lastFinishedPulling="2026-02-16 11:23:11.104665582 +0000 UTC m=+985.824850562" observedRunningTime="2026-02-16 11:23:11.789722199 +0000 UTC m=+986.509907179" watchObservedRunningTime="2026-02-16 11:23:11.793391739 +0000 UTC m=+986.513576719"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.812359 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz" podStartSLOduration=2.5604525430000002 podStartE2EDuration="29.812344787s" podCreationTimestamp="2026-02-16 11:22:42 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.826381458 +0000 UTC m=+958.546566428" lastFinishedPulling="2026-02-16 11:23:11.078273682 +0000 UTC m=+985.798458672" observedRunningTime="2026-02-16 11:23:11.810674881 +0000 UTC m=+986.530859861" watchObservedRunningTime="2026-02-16 11:23:11.812344787 +0000 UTC m=+986.532529767"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.834820 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl" podStartSLOduration=2.615526138 podStartE2EDuration="29.83480365s" podCreationTimestamp="2026-02-16 11:22:42 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.785011942 +0000 UTC m=+958.505196922" lastFinishedPulling="2026-02-16 11:23:11.004289454 +0000 UTC m=+985.724474434" observedRunningTime="2026-02-16 11:23:11.831111609 +0000 UTC m=+986.551296589" watchObservedRunningTime="2026-02-16 11:23:11.83480365 +0000 UTC m=+986.554988630"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.858510 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg" podStartSLOduration=22.171305593 podStartE2EDuration="30.858489556s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:23:02.380738527 +0000 UTC m=+977.100923507" lastFinishedPulling="2026-02-16 11:23:11.06792249 +0000 UTC m=+985.788107470" observedRunningTime="2026-02-16 11:23:11.852397439 +0000 UTC m=+986.572582419" watchObservedRunningTime="2026-02-16 11:23:11.858489556 +0000 UTC m=+986.578674536"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.872015 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn" podStartSLOduration=3.505527551 podStartE2EDuration="30.871999194s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.701041816 +0000 UTC m=+958.421226796" lastFinishedPulling="2026-02-16 11:23:11.067513459 +0000 UTC m=+985.787698439" observedRunningTime="2026-02-16 11:23:11.870001201 +0000 UTC m=+986.590186181" watchObservedRunningTime="2026-02-16 11:23:11.871999194 +0000 UTC m=+986.592184174"
Feb 16 11:23:11 crc kubenswrapper[4797]: I0216 11:23:11.889966 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh" podStartSLOduration=3.543311742 podStartE2EDuration="30.889940145s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.720847545 +0000 UTC m=+958.441032525" lastFinishedPulling="2026-02-16 11:23:11.067475948 +0000 UTC m=+985.787660928" observedRunningTime="2026-02-16 11:23:11.885560695 +0000 UTC m=+986.605745675" watchObservedRunningTime="2026-02-16 11:23:11.889940145 +0000 UTC m=+986.610125125"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.066599 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-tddsr"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.093422 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-dn2rf"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.093469 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.189051 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-llbkc"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.260876 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-bmz7r"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.277880 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-jcjc8"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.300905 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-5f2m4"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.429122 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-ctjqh"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.493551 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-92zcz"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.497836 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-gp7jv"
Feb 16 11:23:12 crc kubenswrapper[4797]: I0216 11:23:12.558423 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-rcgvv"
Feb 16 11:23:13 crc kubenswrapper[4797]: I0216 11:23:13.920413 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:23:13 crc kubenswrapper[4797]: I0216 11:23:13.926909 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c5013d9b-4630-450f-80bf-312fbc3256ec-cert\") pod \"infra-operator-controller-manager-79d975b745-l6xg9\" (UID: \"c5013d9b-4630-450f-80bf-312fbc3256ec\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:23:14 crc kubenswrapper[4797]: I0216 11:23:14.030014 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-sq7kt"
Feb 16 11:23:14 crc kubenswrapper[4797]: I0216 11:23:14.039051 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:23:14 crc kubenswrapper[4797]: I0216 11:23:14.270697 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"]
Feb 16 11:23:14 crc kubenswrapper[4797]: I0216 11:23:14.774670 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" event={"ID":"c5013d9b-4630-450f-80bf-312fbc3256ec","Type":"ContainerStarted","Data":"289fdd11a9be1c08a9ba35839fe26c3f96cf0dcdccbfe634d0e2e2e67cc8091a"}
Feb 16 11:23:15 crc kubenswrapper[4797]: I0216 11:23:15.783543 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" event={"ID":"931bff49-5f65-49a0-8dab-c1b5858ec958","Type":"ContainerStarted","Data":"d50c37aa66c95498022910fb67bc5bb0454e5fc12471417d318840db2a26dca2"}
Feb 16 11:23:15 crc kubenswrapper[4797]: I0216 11:23:15.784649 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb"
Feb 16 11:23:15 crc kubenswrapper[4797]: I0216 11:23:15.786008 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" event={"ID":"e88a6189-ad47-438a-baab-3dcc5d781126","Type":"ContainerStarted","Data":"175bb698975f6da4b7a7b758011d14b9ecc60add2f47c38ce251420de795d7b1"}
Feb 16 11:23:15 crc kubenswrapper[4797]: I0216 11:23:15.786346 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc"
Feb 16 11:23:15 crc kubenswrapper[4797]: I0216 11:23:15.805676 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb" podStartSLOduration=2.7977272319999997 podStartE2EDuration="34.805657042s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.430738767 +0000 UTC m=+958.150923747" lastFinishedPulling="2026-02-16 11:23:15.438668577 +0000 UTC m=+990.158853557" observedRunningTime="2026-02-16 11:23:15.801652583 +0000 UTC m=+990.521837583" watchObservedRunningTime="2026-02-16 11:23:15.805657042 +0000 UTC m=+990.525842022"
Feb 16 11:23:15 crc kubenswrapper[4797]: I0216 11:23:15.824063 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc" podStartSLOduration=2.434004324 podStartE2EDuration="33.824042935s" podCreationTimestamp="2026-02-16 11:22:42 +0000 UTC" firstStartedPulling="2026-02-16 11:22:43.657161942 +0000 UTC m=+958.377346922" lastFinishedPulling="2026-02-16 11:23:15.047200553 +0000 UTC m=+989.767385533" observedRunningTime="2026-02-16 11:23:15.823192142 +0000 UTC m=+990.543377122" watchObservedRunningTime="2026-02-16 11:23:15.824042935 +0000 UTC m=+990.544227915"
Feb 16 11:23:16 crc kubenswrapper[4797]: I0216 11:23:16.808676 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" event={"ID":"c5013d9b-4630-450f-80bf-312fbc3256ec","Type":"ContainerStarted","Data":"700f497be8b0f2759ccbbab11e9f3e768012f08e69736301b698c4c42e30934d"}
Feb 16 11:23:16 crc kubenswrapper[4797]: I0216 11:23:16.810285 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:23:16 crc kubenswrapper[4797]: I0216 11:23:16.844603 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9" podStartSLOduration=33.674631164 podStartE2EDuration="35.844561066s" podCreationTimestamp="2026-02-16 11:22:41 +0000 UTC" firstStartedPulling="2026-02-16 11:23:14.276111378 +0000 UTC m=+988.996296358" lastFinishedPulling="2026-02-16 11:23:16.44604128 +0000 UTC m=+991.166226260" observedRunningTime="2026-02-16 11:23:16.838415459 +0000 UTC m=+991.558600439" watchObservedRunningTime="2026-02-16 11:23:16.844561066 +0000 UTC m=+991.564746046"
Feb 16 11:23:18 crc kubenswrapper[4797]: I0216 11:23:18.319636 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.253177 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-c4rfb"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.589805 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-9tbc9"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.596190 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-q54xh"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.618177 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-czcgn"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.723293 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-pvbnl"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.733914 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b85768bb-4k7fc"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.750489 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-rk7qz"
Feb 16 11:23:22 crc kubenswrapper[4797]: I0216 11:23:22.820359 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-vxl6t"
Feb 16 11:23:24 crc kubenswrapper[4797]: I0216 11:23:24.052768 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-l6xg9"
Feb 16 11:23:41 crc kubenswrapper[4797]: I0216 11:23:41.704059 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:23:41 crc kubenswrapper[4797]: I0216 11:23:41.704525 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon"
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:23:41 crc kubenswrapper[4797]: I0216 11:23:41.704611 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:23:41 crc kubenswrapper[4797]: I0216 11:23:41.705460 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dbd4bdb440a4910da1233a40f8d1a68f6e489c128c5069841c232e62039aea64"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:23:41 crc kubenswrapper[4797]: I0216 11:23:41.705557 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://dbd4bdb440a4910da1233a40f8d1a68f6e489c128c5069841c232e62039aea64" gracePeriod=600 Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.053265 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="dbd4bdb440a4910da1233a40f8d1a68f6e489c128c5069841c232e62039aea64" exitCode=0 Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.053306 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"dbd4bdb440a4910da1233a40f8d1a68f6e489c128c5069841c232e62039aea64"} Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.053335 4797 scope.go:117] "RemoveContainer" containerID="e7af1c89447ff7ab76e09ca5508cebe1098d580ac409a9bf112a6d6541596109" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.191194 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bw5r7"] Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.193059 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.195165 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-566gf" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.201258 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.201528 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.201598 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.202993 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bw5r7"] Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.281476 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5qd52"] Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.283046 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.285747 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.294253 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5qd52"] Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.313484 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqwtj\" (UniqueName: \"kubernetes.io/projected/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-kube-api-access-dqwtj\") pod \"dnsmasq-dns-675f4bcbfc-bw5r7\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.313592 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-config\") pod \"dnsmasq-dns-675f4bcbfc-bw5r7\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.415264 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqwtj\" (UniqueName: \"kubernetes.io/projected/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-kube-api-access-dqwtj\") pod \"dnsmasq-dns-675f4bcbfc-bw5r7\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.415351 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-config\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.415377 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-config\") pod \"dnsmasq-dns-675f4bcbfc-bw5r7\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.415407 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmk2\" (UniqueName: \"kubernetes.io/projected/15b7ee09-6548-4f55-b125-65be5d58fcba-kube-api-access-wtmk2\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.415439 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.416300 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-config\") pod \"dnsmasq-dns-675f4bcbfc-bw5r7\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 
11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.437303 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqwtj\" (UniqueName: \"kubernetes.io/projected/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-kube-api-access-dqwtj\") pod \"dnsmasq-dns-675f4bcbfc-bw5r7\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.517601 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtmk2\" (UniqueName: \"kubernetes.io/projected/15b7ee09-6548-4f55-b125-65be5d58fcba-kube-api-access-wtmk2\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.517678 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.517786 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-config\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.518604 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.518925 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-config\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.519087 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.540050 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtmk2\" (UniqueName: \"kubernetes.io/projected/15b7ee09-6548-4f55-b125-65be5d58fcba-kube-api-access-wtmk2\") pod \"dnsmasq-dns-78dd6ddcc-5qd52\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.600872 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:23:42 crc kubenswrapper[4797]: I0216 11:23:42.949967 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bw5r7"] Feb 16 11:23:42 crc kubenswrapper[4797]: W0216 11:23:42.959164 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0944b587_50ee_4d4f_93e2_b84ad7cdce7b.slice/crio-bac3bfa1c71183283c2af90ecda82396b3ed76c095400b9a22a7b0f2878a44da WatchSource:0}: Error finding container bac3bfa1c71183283c2af90ecda82396b3ed76c095400b9a22a7b0f2878a44da: Status 404 returned error can't find the container with id bac3bfa1c71183283c2af90ecda82396b3ed76c095400b9a22a7b0f2878a44da Feb 16 11:23:43 crc kubenswrapper[4797]: I0216 11:23:43.063465 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" event={"ID":"0944b587-50ee-4d4f-93e2-b84ad7cdce7b","Type":"ContainerStarted","Data":"bac3bfa1c71183283c2af90ecda82396b3ed76c095400b9a22a7b0f2878a44da"} Feb 16 11:23:43 crc kubenswrapper[4797]: I0216 11:23:43.063731 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5qd52"] Feb 16 11:23:43 crc kubenswrapper[4797]: W0216 11:23:43.064243 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15b7ee09_6548_4f55_b125_65be5d58fcba.slice/crio-e4adc02a4d4d6aef67c91a571652ad441ca18e1914e00d69324f23a103737550 WatchSource:0}: Error finding container e4adc02a4d4d6aef67c91a571652ad441ca18e1914e00d69324f23a103737550: Status 404 returned error can't find the container with id e4adc02a4d4d6aef67c91a571652ad441ca18e1914e00d69324f23a103737550 Feb 16 11:23:43 crc kubenswrapper[4797]: I0216 11:23:43.066557 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"ba3093423333884d09bb1138cadcee536dc44a6bdfca7536ddc371719d3f0a4a"} Feb 16 11:23:44 crc kubenswrapper[4797]: I0216 11:23:44.075222 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" event={"ID":"15b7ee09-6548-4f55-b125-65be5d58fcba","Type":"ContainerStarted","Data":"e4adc02a4d4d6aef67c91a571652ad441ca18e1914e00d69324f23a103737550"} Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.021877 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bw5r7"] Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.039758 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9qgrb"] Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.042889 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.056239 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9qgrb"] Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.158964 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-config\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.159050 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.159147 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbrvm\" (UniqueName: \"kubernetes.io/projected/d66691ed-2117-49d8-b5fd-5c5281295b31-kube-api-access-jbrvm\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.261126 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbrvm\" (UniqueName: \"kubernetes.io/projected/d66691ed-2117-49d8-b5fd-5c5281295b31-kube-api-access-jbrvm\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.261209 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-config\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.261264 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.262311 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-dns-svc\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.267600 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-config\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.293499 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbrvm\" (UniqueName: 
\"kubernetes.io/projected/d66691ed-2117-49d8-b5fd-5c5281295b31-kube-api-access-jbrvm\") pod \"dnsmasq-dns-666b6646f7-9qgrb\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.313536 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5qd52"] Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.335295 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jqc8n"] Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.336611 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.350916 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jqc8n"] Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.379062 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.464349 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-config\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.464437 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.464496 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfpgv\" (UniqueName: \"kubernetes.io/projected/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-kube-api-access-rfpgv\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.566440 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-config\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.566862 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.566900 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfpgv\" (UniqueName: \"kubernetes.io/projected/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-kube-api-access-rfpgv\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.569448 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-config\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.570028 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.589004 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfpgv\" (UniqueName: \"kubernetes.io/projected/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-kube-api-access-rfpgv\") pod \"dnsmasq-dns-57d769cc4f-jqc8n\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:45 crc kubenswrapper[4797]: I0216 11:23:45.697081 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.186913 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.188262 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.191755 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.191962 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.192134 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.192299 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2ltp5" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.194611 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.194834 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.195116 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.200253 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279546 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279613 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" 
(UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279638 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279657 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279685 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9zm\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-kube-api-access-2z9zm\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279720 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279739 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279775 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279795 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279818 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-config-data\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.279852 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381412 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381491 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-config-data\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381558 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381678 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381737 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381792 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381830 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381882 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9zm\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-kube-api-access-2z9zm\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381952 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.381989 4797 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.382050 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.383111 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.383384 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.383497 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.383520 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-config-data\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.386725 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.388009 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.388055 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a15898c08795a0cbbd63b934432240761d6609e2acce25618c2431202c98275/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.388069 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.389736 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.389858 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.390215 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.417104 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-430f53e4-732c-4ea9-b53a-bdd2aefc3e69\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.428109 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9zm\" (UniqueName: \"kubernetes.io/projected/40b82cbf-8ce3-45e9-a87e-a96cbe83488c-kube-api-access-2z9zm\") pod \"rabbitmq-server-0\" (UID: \"40b82cbf-8ce3-45e9-a87e-a96cbe83488c\") " pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.444511 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.446336 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.453631 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.453678 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.453731 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.453916 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.453950 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8kmvx" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.454134 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.454220 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.465259 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.559708 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584100 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584149 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1aa87d44-dc52-4398-a8f5-0adf7d33966e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584176 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584214 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584283 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") 
" pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584308 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t4qd\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-kube-api-access-8t4qd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584404 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1aa87d44-dc52-4398-a8f5-0adf7d33966e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584509 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584554 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584704 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.584771 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686193 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1aa87d44-dc52-4398-a8f5-0adf7d33966e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686262 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686299 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686333 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686358 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686403 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686420 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1aa87d44-dc52-4398-a8f5-0adf7d33966e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686438 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686463 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686495 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.686512 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t4qd\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-kube-api-access-8t4qd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.687343 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.687875 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.687957 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.688012 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.688434 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.688469 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9ec714a5712b2510198f64286bf8eea44a0b57526694b901c68fcde530eb92bb/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.688621 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/1aa87d44-dc52-4398-a8f5-0adf7d33966e-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.691887 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/1aa87d44-dc52-4398-a8f5-0adf7d33966e-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.691957 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/1aa87d44-dc52-4398-a8f5-0adf7d33966e-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.692246 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.700066 4797 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.703496 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t4qd\" (UniqueName: \"kubernetes.io/projected/1aa87d44-dc52-4398-a8f5-0adf7d33966e-kube-api-access-8t4qd\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.712012 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-3e7106d5-dfa4-4c24-aae4-c27e659bdd00\") pod \"rabbitmq-cell1-server-0\" (UID: \"1aa87d44-dc52-4398-a8f5-0adf7d33966e\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:46 crc kubenswrapper[4797]: I0216 11:23:46.786353 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.635527 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.648258 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.651052 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-j26zt" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.651625 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.651700 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.654980 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.661727 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.664542 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803008 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803140 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvnqr\" (UniqueName: \"kubernetes.io/projected/08b607dd-023c-4050-87d5-58f8f7f1714a-kube-api-access-tvnqr\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803178 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803199 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-config-data-default\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803230 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b607dd-023c-4050-87d5-58f8f7f1714a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803285 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-kolla-config\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803315 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b607dd-023c-4050-87d5-58f8f7f1714a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.803381 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/08b607dd-023c-4050-87d5-58f8f7f1714a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906429 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906764 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-config-data-default\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906793 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b607dd-023c-4050-87d5-58f8f7f1714a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906816 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-kolla-config\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906843 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b607dd-023c-4050-87d5-58f8f7f1714a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906868 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/08b607dd-023c-4050-87d5-58f8f7f1714a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906898 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.906960 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvnqr\" (UniqueName: \"kubernetes.io/projected/08b607dd-023c-4050-87d5-58f8f7f1714a-kube-api-access-tvnqr\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.907758 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-config-data-default\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.907870 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-kolla-config\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.907993 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/08b607dd-023c-4050-87d5-58f8f7f1714a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.909544 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08b607dd-023c-4050-87d5-58f8f7f1714a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.909804 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.909834 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3b74772692ea4484a0aae89d58ee73e0f8b6dc1a3ef64d6565abad53d12f3c99/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.914190 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/08b607dd-023c-4050-87d5-58f8f7f1714a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.927513 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b607dd-023c-4050-87d5-58f8f7f1714a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.933371 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvnqr\" (UniqueName: \"kubernetes.io/projected/08b607dd-023c-4050-87d5-58f8f7f1714a-kube-api-access-tvnqr\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.950198 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a41de600-1a61-4b07-9f5a-48f6fe92f8a7\") pod \"openstack-galera-0\" (UID: \"08b607dd-023c-4050-87d5-58f8f7f1714a\") " pod="openstack/openstack-galera-0" Feb 16 11:23:47 crc kubenswrapper[4797]: I0216 11:23:47.978099 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.139507 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.140673 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.151255 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.154519 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.154843 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-pp7kw" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.155469 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.157086 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.229640 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.229690 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljnj\" (UniqueName: \"kubernetes.io/projected/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-kube-api-access-6ljnj\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.229787 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.229821 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.229855 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.229981 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-839ff122-e238-4f01-9b11-f5d99968d62a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-839ff122-e238-4f01-9b11-f5d99968d62a\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.232113 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.232228 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333248 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333334 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333385 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333405 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljnj\" (UniqueName: \"kubernetes.io/projected/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-kube-api-access-6ljnj\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333463 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333490 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333740 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.333771 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-839ff122-e238-4f01-9b11-f5d99968d62a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-839ff122-e238-4f01-9b11-f5d99968d62a\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.334028 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.335318 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.337201 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.337716 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.337760 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-839ff122-e238-4f01-9b11-f5d99968d62a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-839ff122-e238-4f01-9b11-f5d99968d62a\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e1a42491bda3c5a6056caebf8d1470f394854aebf5a79314fd163d55a8355901/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.337876 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.339936 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.340278 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.372366 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljnj\" (UniqueName: 
\"kubernetes.io/projected/4acd6dc5-d9e3-4a05-aed4-ecc80733f365-kube-api-access-6ljnj\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.379389 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-839ff122-e238-4f01-9b11-f5d99968d62a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-839ff122-e238-4f01-9b11-f5d99968d62a\") pod \"openstack-cell1-galera-0\" (UID: \"4acd6dc5-d9e3-4a05-aed4-ecc80733f365\") " pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.451785 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.453005 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.532321 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.535284 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.535627 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5bpn8" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.535859 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.539112 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.544188 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/517059fd-92d8-4058-b426-5653912b7a41-kolla-config\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.544266 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdgps\" (UniqueName: \"kubernetes.io/projected/517059fd-92d8-4058-b426-5653912b7a41-kube-api-access-tdgps\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.544304 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/517059fd-92d8-4058-b426-5653912b7a41-memcached-tls-certs\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.544332 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/517059fd-92d8-4058-b426-5653912b7a41-config-data\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.544395 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/517059fd-92d8-4058-b426-5653912b7a41-combined-ca-bundle\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.645368 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/517059fd-92d8-4058-b426-5653912b7a41-kolla-config\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.645428 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdgps\" (UniqueName: \"kubernetes.io/projected/517059fd-92d8-4058-b426-5653912b7a41-kube-api-access-tdgps\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.645464 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/517059fd-92d8-4058-b426-5653912b7a41-memcached-tls-certs\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.645490 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/517059fd-92d8-4058-b426-5653912b7a41-config-data\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.645533 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/517059fd-92d8-4058-b426-5653912b7a41-combined-ca-bundle\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.646281 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/517059fd-92d8-4058-b426-5653912b7a41-kolla-config\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.646418 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/517059fd-92d8-4058-b426-5653912b7a41-config-data\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.650039 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/517059fd-92d8-4058-b426-5653912b7a41-memcached-tls-certs\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.652546 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/517059fd-92d8-4058-b426-5653912b7a41-combined-ca-bundle\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.664090 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdgps\" (UniqueName: 
\"kubernetes.io/projected/517059fd-92d8-4058-b426-5653912b7a41-kube-api-access-tdgps\") pod \"memcached-0\" (UID: \"517059fd-92d8-4058-b426-5653912b7a41\") " pod="openstack/memcached-0" Feb 16 11:23:49 crc kubenswrapper[4797]: I0216 11:23:49.865539 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.658816 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.661446 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.663285 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-5dwrr" Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.670548 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.681724 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn8f5\" (UniqueName: \"kubernetes.io/projected/3bf0ec48-8b5b-4671-b213-f04c4e66ad9e-kube-api-access-qn8f5\") pod \"kube-state-metrics-0\" (UID: \"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e\") " pod="openstack/kube-state-metrics-0" Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.789958 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn8f5\" (UniqueName: \"kubernetes.io/projected/3bf0ec48-8b5b-4671-b213-f04c4e66ad9e-kube-api-access-qn8f5\") pod \"kube-state-metrics-0\" (UID: \"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e\") " pod="openstack/kube-state-metrics-0" Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.815123 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn8f5\" (UniqueName: \"kubernetes.io/projected/3bf0ec48-8b5b-4671-b213-f04c4e66ad9e-kube-api-access-qn8f5\") pod \"kube-state-metrics-0\" (UID: \"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e\") " pod="openstack/kube-state-metrics-0" Feb 16 11:23:51 crc kubenswrapper[4797]: I0216 11:23:51.991799 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.352204 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.354075 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.356746 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.356943 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.358165 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.359016 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.359141 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-7mn75" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.371550 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.398778 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad8679cc-1167-4feb-a53a-49bded099628-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.399735 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ad8679cc-1167-4feb-a53a-49bded099628-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.399816 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad8679cc-1167-4feb-a53a-49bded099628-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.400003 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.400087 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwmsg\" (UniqueName: \"kubernetes.io/projected/ad8679cc-1167-4feb-a53a-49bded099628-kube-api-access-wwmsg\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.400124 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-cluster-tls-config\") pod 
\"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.400149 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.501995 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ad8679cc-1167-4feb-a53a-49bded099628-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.502061 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad8679cc-1167-4feb-a53a-49bded099628-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.502132 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.502178 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwmsg\" (UniqueName: \"kubernetes.io/projected/ad8679cc-1167-4feb-a53a-49bded099628-kube-api-access-wwmsg\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.502205 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.502232 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.502285 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad8679cc-1167-4feb-a53a-49bded099628-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.509626 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/ad8679cc-1167-4feb-a53a-49bded099628-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.514305 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ad8679cc-1167-4feb-a53a-49bded099628-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.526900 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ad8679cc-1167-4feb-a53a-49bded099628-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.528280 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.529180 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.535199 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ad8679cc-1167-4feb-a53a-49bded099628-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.560694 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwmsg\" (UniqueName: \"kubernetes.io/projected/ad8679cc-1167-4feb-a53a-49bded099628-kube-api-access-wwmsg\") pod \"alertmanager-metric-storage-0\" (UID: \"ad8679cc-1167-4feb-a53a-49bded099628\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.680222 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.979975 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.988921 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.995704 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.995733 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.995880 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.995967 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.996078 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.996164 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.996254 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9ggq8" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.996329 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 11:23:52 crc kubenswrapper[4797]: I0216 11:23:52.997264 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.110911 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.110953 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-config\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.110990 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.111015 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.111038 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/113930a6-db19-4e43-bd2b-75ef1d11c021-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.111061 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdp9s\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-kube-api-access-gdp9s\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.111127 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.111152 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.111184 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.111202 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.212840 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.212965 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.213033 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.213068 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.213092 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.213782 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.213939 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.213975 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.216787 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.218213 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-config\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.218619 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.218717 4797 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.218766 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/113930a6-db19-4e43-bd2b-75ef1d11c021-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.218830 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdp9s\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-kube-api-access-gdp9s\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.223640 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.223993 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.224085 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
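
[Note: the UniqueName fields in these records follow the pattern kubernetes.io/<plugin>/<pod-UID>-<volume-name> (CSI volumes instead use kubernetes.io/csi/<driver>^<volume-handle>), so every mount record can be correlated back to a volume in the pod spec. A small client-go sketch, assuming a reachable kubeconfig at an illustrative path, that lists the spec volumes behind the prometheus-metric-storage-0 records above:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the cluster at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := cs.CoreV1().Pods("openstack").Get(
            context.Background(), "prometheus-metric-storage-0", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        // Each name printed here reappears in the kubelet records as
        // kubernetes.io/<plugin>/<pod-UID>-<name>.
        for _, v := range pod.Spec.Volumes {
            fmt.Printf("%s (UID %s): volume %s\n", pod.Name, pod.UID, v.Name)
        }
    }
]
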
Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.224128 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1f45b484f2970997eddc6379d7fc57939204465e8f811ff0d82af263170b706/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.225331 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-config\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.240322 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/113930a6-db19-4e43-bd2b-75ef1d11c021-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.243067 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdp9s\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-kube-api-access-gdp9s\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.258092 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:53 crc kubenswrapper[4797]: I0216 11:23:53.312267 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.400540 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dht7z"] Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.403277 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.409757 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.413196 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.415002 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-qcgw2" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.420905 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dht7z"] Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.428964 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-zgw2f"] Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.435336 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.438971 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zgw2f"] Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.554943 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-run\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555003 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-run\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555034 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4cd0f86-ee13-4721-b2fe-091b428a14bd-scripts\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555108 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-run-ovn\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555128 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3114c460-eb74-48a9-bf0c-d32fe63a71be-ovn-controller-tls-certs\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555157 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-log\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " 
pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555179 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3114c460-eb74-48a9-bf0c-d32fe63a71be-combined-ca-bundle\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555197 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7c9g\" (UniqueName: \"kubernetes.io/projected/3114c460-eb74-48a9-bf0c-d32fe63a71be-kube-api-access-f7c9g\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555214 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-lib\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555254 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mnrb\" (UniqueName: \"kubernetes.io/projected/d4cd0f86-ee13-4721-b2fe-091b428a14bd-kube-api-access-7mnrb\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555270 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3114c460-eb74-48a9-bf0c-d32fe63a71be-scripts\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555287 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-log-ovn\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.555311 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-etc-ovs\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.656906 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-run\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.656995 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-run\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc 
kubenswrapper[4797]: I0216 11:23:55.657048 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4cd0f86-ee13-4721-b2fe-091b428a14bd-scripts\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657093 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-run-ovn\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657123 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3114c460-eb74-48a9-bf0c-d32fe63a71be-ovn-controller-tls-certs\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657155 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-log\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657181 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3114c460-eb74-48a9-bf0c-d32fe63a71be-combined-ca-bundle\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657211 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7c9g\" (UniqueName: \"kubernetes.io/projected/3114c460-eb74-48a9-bf0c-d32fe63a71be-kube-api-access-f7c9g\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657232 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-lib\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657292 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mnrb\" (UniqueName: \"kubernetes.io/projected/d4cd0f86-ee13-4721-b2fe-091b428a14bd-kube-api-access-7mnrb\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657311 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3114c460-eb74-48a9-bf0c-d32fe63a71be-scripts\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657331 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-log-ovn\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657362 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-etc-ovs\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657499 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-run\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657544 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-run\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657571 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-etc-ovs\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.657737 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-lib\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.658168 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d4cd0f86-ee13-4721-b2fe-091b428a14bd-var-log\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.659052 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4cd0f86-ee13-4721-b2fe-091b428a14bd-scripts\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.659158 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-run-ovn\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.659230 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3114c460-eb74-48a9-bf0c-d32fe63a71be-var-log-ovn\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.660486 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3114c460-eb74-48a9-bf0c-d32fe63a71be-scripts\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.670530 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3114c460-eb74-48a9-bf0c-d32fe63a71be-combined-ca-bundle\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.681935 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/3114c460-eb74-48a9-bf0c-d32fe63a71be-ovn-controller-tls-certs\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.682213 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mnrb\" (UniqueName: \"kubernetes.io/projected/d4cd0f86-ee13-4721-b2fe-091b428a14bd-kube-api-access-7mnrb\") pod \"ovn-controller-ovs-zgw2f\" (UID: \"d4cd0f86-ee13-4721-b2fe-091b428a14bd\") " pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.684126 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7c9g\" (UniqueName: \"kubernetes.io/projected/3114c460-eb74-48a9-bf0c-d32fe63a71be-kube-api-access-f7c9g\") pod \"ovn-controller-dht7z\" (UID: \"3114c460-eb74-48a9-bf0c-d32fe63a71be\") " pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.729329 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dht7z" Feb 16 11:23:55 crc kubenswrapper[4797]: I0216 11:23:55.756820 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.038173 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.040521 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.046536 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.046873 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.047039 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9bprx" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.047420 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.047606 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.051177 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185179 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e52214d-a751-4e7f-913e-064677d2fe1f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185523 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185553 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185614 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e52214d-a751-4e7f-913e-064677d2fe1f-config\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185636 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8f7\" (UniqueName: \"kubernetes.io/projected/8e52214d-a751-4e7f-913e-064677d2fe1f-kube-api-access-sx8f7\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185675 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185708 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e52214d-a751-4e7f-913e-064677d2fe1f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.185746 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287143 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e52214d-a751-4e7f-913e-064677d2fe1f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287211 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287242 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287278 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e52214d-a751-4e7f-913e-064677d2fe1f-config\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287326 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8f7\" (UniqueName: \"kubernetes.io/projected/8e52214d-a751-4e7f-913e-064677d2fe1f-kube-api-access-sx8f7\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287375 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287425 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e52214d-a751-4e7f-913e-064677d2fe1f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287480 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.287999 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8e52214d-a751-4e7f-913e-064677d2fe1f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.288408 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e52214d-a751-4e7f-913e-064677d2fe1f-config\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.288984 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e52214d-a751-4e7f-913e-064677d2fe1f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.292221 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.292493 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.295558 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.295621 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/bad75395bd51305a79d7cc634127fb3b016323cc6114de0d414f648804b8ff87/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.296557 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e52214d-a751-4e7f-913e-064677d2fe1f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.310698 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8f7\" (UniqueName: \"kubernetes.io/projected/8e52214d-a751-4e7f-913e-064677d2fe1f-kube-api-access-sx8f7\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.329030 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ea3b0561-f5f5-4e82-ae97-29bf318bd4bd\") pod \"ovsdbserver-sb-0\" (UID: \"8e52214d-a751-4e7f-913e-064677d2fe1f\") " pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:57 crc kubenswrapper[4797]: I0216 11:23:57.392115 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 11:23:58 crc kubenswrapper[4797]: I0216 11:23:58.704131 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9qgrb"] Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.322716 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.324570 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.327489 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.327559 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-67pjq" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.327504 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.327936 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.345895 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.417839 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a782b16e-c29b-4d0c-ae20-23e2822d8e02-config\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.418178 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a782b16e-c29b-4d0c-ae20-23e2822d8e02-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.418337 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxmcw\" (UniqueName: \"kubernetes.io/projected/a782b16e-c29b-4d0c-ae20-23e2822d8e02-kube-api-access-vxmcw\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.418461 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.418604 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.418737 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.418962 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-combined-ca-bundle\") 
pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.419103 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a782b16e-c29b-4d0c-ae20-23e2822d8e02-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521310 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521688 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521724 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521861 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521903 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a782b16e-c29b-4d0c-ae20-23e2822d8e02-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521936 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a782b16e-c29b-4d0c-ae20-23e2822d8e02-config\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521966 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a782b16e-c29b-4d0c-ae20-23e2822d8e02-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.521996 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxmcw\" (UniqueName: \"kubernetes.io/projected/a782b16e-c29b-4d0c-ae20-23e2822d8e02-kube-api-access-vxmcw\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.523319 4797 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a782b16e-c29b-4d0c-ae20-23e2822d8e02-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.528897 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a782b16e-c29b-4d0c-ae20-23e2822d8e02-config\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.529733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a782b16e-c29b-4d0c-ae20-23e2822d8e02-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.530426 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.531963 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.531994 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d653a65541cc4fcdf5d5ca1d002f26b553119799ffaf665e55107cfc95d6a74b/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.534038 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.543234 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a782b16e-c29b-4d0c-ae20-23e2822d8e02-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.557472 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxmcw\" (UniqueName: \"kubernetes.io/projected/a782b16e-c29b-4d0c-ae20-23e2822d8e02-kube-api-access-vxmcw\") pod \"ovsdbserver-nb-0\" (UID: \"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.597620 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f85423c2-8b6e-48d4-a06a-60c53fdc77d0\") pod \"ovsdbserver-nb-0\" (UID: 
\"a782b16e-c29b-4d0c-ae20-23e2822d8e02\") " pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: E0216 11:23:59.611902 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 11:23:59 crc kubenswrapper[4797]: E0216 11:23:59.612121 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtmk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-5qd52_openstack(15b7ee09-6548-4f55-b125-65be5d58fcba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:23:59 crc kubenswrapper[4797]: E0216 11:23:59.614850 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" podUID="15b7ee09-6548-4f55-b125-65be5d58fcba" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.651470 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 11:23:59 crc kubenswrapper[4797]: E0216 11:23:59.782542 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 11:23:59 crc kubenswrapper[4797]: E0216 11:23:59.782698 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dqwtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-bw5r7_openstack(0944b587-50ee-4d4f-93e2-b84ad7cdce7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:23:59 crc kubenswrapper[4797]: E0216 11:23:59.784740 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" podUID="0944b587-50ee-4d4f-93e2-b84ad7cdce7b" Feb 16 11:23:59 crc kubenswrapper[4797]: I0216 11:23:59.860330 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 11:23:59 crc kubenswrapper[4797]: W0216 11:23:59.892679 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08b607dd_023c_4050_87d5_58f8f7f1714a.slice/crio-749e41356de2f88c7923564ee1eb03b1dea6b5b3aef726a60e54353972fefce9 WatchSource:0}: Error finding container 749e41356de2f88c7923564ee1eb03b1dea6b5b3aef726a60e54353972fefce9: Status 404 returned error can't find the 
container with id 749e41356de2f88c7923564ee1eb03b1dea6b5b3aef726a60e54353972fefce9 Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.044205 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.069764 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: W0216 11:24:00.071107 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40b82cbf_8ce3_45e9_a87e_a96cbe83488c.slice/crio-c2c11ee295d29bb1de2b7b8357f42c9f9be734089c4ebe3702ceaedf48f05828 WatchSource:0}: Error finding container c2c11ee295d29bb1de2b7b8357f42c9f9be734089c4ebe3702ceaedf48f05828: Status 404 returned error can't find the container with id c2c11ee295d29bb1de2b7b8357f42c9f9be734089c4ebe3702ceaedf48f05828 Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.214532 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" event={"ID":"d66691ed-2117-49d8-b5fd-5c5281295b31","Type":"ContainerStarted","Data":"76ed211281d31c130011ba6ced151212285d13ce9ceb9f57f06586846ee2fe43"} Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.219157 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.220978 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"08b607dd-023c-4050-87d5-58f8f7f1714a","Type":"ContainerStarted","Data":"749e41356de2f88c7923564ee1eb03b1dea6b5b3aef726a60e54353972fefce9"} Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.222025 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"40b82cbf-8ce3-45e9-a87e-a96cbe83488c","Type":"ContainerStarted","Data":"c2c11ee295d29bb1de2b7b8357f42c9f9be734089c4ebe3702ceaedf48f05828"} Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.223348 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aa87d44-dc52-4398-a8f5-0adf7d33966e","Type":"ContainerStarted","Data":"8d8d513e72bf261759391f85f3015a786f61be6af8af3da905a21012c7dcd644"} Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.332418 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.345518 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: W0216 11:24:00.358126 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4acd6dc5_d9e3_4a05_aed4_ecc80733f365.slice/crio-d489a26e3da653d22222004e0f6140fa2349f8205e057f9e824aa9b12518ed6c WatchSource:0}: Error finding container d489a26e3da653d22222004e0f6140fa2349f8205e057f9e824aa9b12518ed6c: Status 404 returned error can't find the container with id d489a26e3da653d22222004e0f6140fa2349f8205e057f9e824aa9b12518ed6c Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.360914 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jqc8n"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.387268 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: W0216 
11:24:00.388100 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06616d46_a0f2_4bd4_ae40_00c67b9bfb0e.slice/crio-0b35a5d4877a5371fbee2adfe58dc8ec2e1c011c058d6a7f0d029fc2a6314ec0 WatchSource:0}: Error finding container 0b35a5d4877a5371fbee2adfe58dc8ec2e1c011c058d6a7f0d029fc2a6314ec0: Status 404 returned error can't find the container with id 0b35a5d4877a5371fbee2adfe58dc8ec2e1c011c058d6a7f0d029fc2a6314ec0 Feb 16 11:24:00 crc kubenswrapper[4797]: W0216 11:24:00.414523 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod517059fd_92d8_4058_b426_5653912b7a41.slice/crio-7c93d86e32b0ed48575b81cb673846d42efedbb19455ab8d3d8d9036c7ba741e WatchSource:0}: Error finding container 7c93d86e32b0ed48575b81cb673846d42efedbb19455ab8d3d8d9036c7ba741e: Status 404 returned error can't find the container with id 7c93d86e32b0ed48575b81cb673846d42efedbb19455ab8d3d8d9036c7ba741e Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.519195 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-zgw2f"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.725502 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.742742 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.765363 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dht7z"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.779380 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.843771 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.862730 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-dns-svc\") pod \"15b7ee09-6548-4f55-b125-65be5d58fcba\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.862876 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-config\") pod \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.862986 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqwtj\" (UniqueName: \"kubernetes.io/projected/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-kube-api-access-dqwtj\") pod \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\" (UID: \"0944b587-50ee-4d4f-93e2-b84ad7cdce7b\") " Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.863042 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtmk2\" (UniqueName: \"kubernetes.io/projected/15b7ee09-6548-4f55-b125-65be5d58fcba-kube-api-access-wtmk2\") pod \"15b7ee09-6548-4f55-b125-65be5d58fcba\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.863330 4797 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "15b7ee09-6548-4f55-b125-65be5d58fcba" (UID: "15b7ee09-6548-4f55-b125-65be5d58fcba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.863518 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-config" (OuterVolumeSpecName: "config") pod "0944b587-50ee-4d4f-93e2-b84ad7cdce7b" (UID: "0944b587-50ee-4d4f-93e2-b84ad7cdce7b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.863865 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-config\") pod \"15b7ee09-6548-4f55-b125-65be5d58fcba\" (UID: \"15b7ee09-6548-4f55-b125-65be5d58fcba\") " Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.864379 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.864405 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.864445 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-config" (OuterVolumeSpecName: "config") pod "15b7ee09-6548-4f55-b125-65be5d58fcba" (UID: "15b7ee09-6548-4f55-b125-65be5d58fcba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.867375 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-kube-api-access-dqwtj" (OuterVolumeSpecName: "kube-api-access-dqwtj") pod "0944b587-50ee-4d4f-93e2-b84ad7cdce7b" (UID: "0944b587-50ee-4d4f-93e2-b84ad7cdce7b"). InnerVolumeSpecName "kube-api-access-dqwtj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.867451 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b7ee09-6548-4f55-b125-65be5d58fcba-kube-api-access-wtmk2" (OuterVolumeSpecName: "kube-api-access-wtmk2") pod "15b7ee09-6548-4f55-b125-65be5d58fcba" (UID: "15b7ee09-6548-4f55-b125-65be5d58fcba"). InnerVolumeSpecName "kube-api-access-wtmk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.965874 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqwtj\" (UniqueName: \"kubernetes.io/projected/0944b587-50ee-4d4f-93e2-b84ad7cdce7b-kube-api-access-dqwtj\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.965926 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtmk2\" (UniqueName: \"kubernetes.io/projected/15b7ee09-6548-4f55-b125-65be5d58fcba-kube-api-access-wtmk2\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:00 crc kubenswrapper[4797]: I0216 11:24:00.965946 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b7ee09-6548-4f55-b125-65be5d58fcba-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.231965 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"517059fd-92d8-4058-b426-5653912b7a41","Type":"ContainerStarted","Data":"7c93d86e32b0ed48575b81cb673846d42efedbb19455ab8d3d8d9036c7ba741e"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.233407 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e","Type":"ContainerStarted","Data":"034c9c9e1cd9ea09d8236466a263a7082c660c0c772b748bf1ea0f4ea51c231d"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.234685 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zgw2f" event={"ID":"d4cd0f86-ee13-4721-b2fe-091b428a14bd","Type":"ContainerStarted","Data":"42fd28d8de927bc3b434c42f68636d23210ee2dab0dee972c1e656f7e439e6b0"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.235842 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" event={"ID":"0944b587-50ee-4d4f-93e2-b84ad7cdce7b","Type":"ContainerDied","Data":"bac3bfa1c71183283c2af90ecda82396b3ed76c095400b9a22a7b0f2878a44da"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.235859 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-bw5r7" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.241648 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" event={"ID":"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e","Type":"ContainerStarted","Data":"0b35a5d4877a5371fbee2adfe58dc8ec2e1c011c058d6a7f0d029fc2a6314ec0"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.243017 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ad8679cc-1167-4feb-a53a-49bded099628","Type":"ContainerStarted","Data":"cba2a4d8d13b0ed9bea19362bd4aaee07da9573e4566f68680fc0e3f71d251a6"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.244389 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dht7z" event={"ID":"3114c460-eb74-48a9-bf0c-d32fe63a71be","Type":"ContainerStarted","Data":"f0a42eb4a4d73b2a778cfb3f3fe291fd05bfad1ed381d76bbefbd88dfbb65f3a"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.245986 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.245989 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-5qd52" event={"ID":"15b7ee09-6548-4f55-b125-65be5d58fcba","Type":"ContainerDied","Data":"e4adc02a4d4d6aef67c91a571652ad441ca18e1914e00d69324f23a103737550"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.248108 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a782b16e-c29b-4d0c-ae20-23e2822d8e02","Type":"ContainerStarted","Data":"23258d8167743f01c470bf72d30c752083a967362ff9e2e1065e7332f8d0997a"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.250682 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerStarted","Data":"511f29667c83de2d4b714f2e976b94e8c362f7787df8b745e4f9df47ffc5fb8e"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.251804 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4acd6dc5-d9e3-4a05-aed4-ecc80733f365","Type":"ContainerStarted","Data":"d489a26e3da653d22222004e0f6140fa2349f8205e057f9e824aa9b12518ed6c"} Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.335187 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bw5r7"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.342467 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-bw5r7"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.362403 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5qd52"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.367349 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-5qd52"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.419281 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-9xdrm"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.420216 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.422545 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.436560 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9xdrm"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.579419 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89c74dc-5e73-48fb-9885-281d013b1e0f-config\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.579712 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fff7\" (UniqueName: \"kubernetes.io/projected/c89c74dc-5e73-48fb-9885-281d013b1e0f-kube-api-access-9fff7\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.579770 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c89c74dc-5e73-48fb-9885-281d013b1e0f-ovs-rundir\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.579805 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89c74dc-5e73-48fb-9885-281d013b1e0f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.580083 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89c74dc-5e73-48fb-9885-281d013b1e0f-combined-ca-bundle\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.580202 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c89c74dc-5e73-48fb-9885-281d013b1e0f-ovn-rundir\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.649486 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9qgrb"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.683648 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89c74dc-5e73-48fb-9885-281d013b1e0f-combined-ca-bundle\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.683723 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c89c74dc-5e73-48fb-9885-281d013b1e0f-ovn-rundir\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.683767 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89c74dc-5e73-48fb-9885-281d013b1e0f-config\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.683799 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fff7\" (UniqueName: \"kubernetes.io/projected/c89c74dc-5e73-48fb-9885-281d013b1e0f-kube-api-access-9fff7\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.683855 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c89c74dc-5e73-48fb-9885-281d013b1e0f-ovs-rundir\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.683899 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89c74dc-5e73-48fb-9885-281d013b1e0f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.690749 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c89c74dc-5e73-48fb-9885-281d013b1e0f-config\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.691051 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c89c74dc-5e73-48fb-9885-281d013b1e0f-ovn-rundir\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.691127 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c89c74dc-5e73-48fb-9885-281d013b1e0f-ovs-rundir\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.691455 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c89c74dc-5e73-48fb-9885-281d013b1e0f-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.700215 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-wtvrd"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.701015 4797 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c89c74dc-5e73-48fb-9885-281d013b1e0f-combined-ca-bundle\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.703797 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.706514 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.712221 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-wtvrd"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.717640 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fff7\" (UniqueName: \"kubernetes.io/projected/c89c74dc-5e73-48fb-9885-281d013b1e0f-kube-api-access-9fff7\") pod \"ovn-controller-metrics-9xdrm\" (UID: \"c89c74dc-5e73-48fb-9885-281d013b1e0f\") " pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.742041 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-9xdrm" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.770695 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.785570 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-config\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.785686 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqznj\" (UniqueName: \"kubernetes.io/projected/a24de11f-dc03-4dd2-9167-65577983742f-kube-api-access-fqznj\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.785712 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.785756 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.887502 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-config\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 
11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.887600 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqznj\" (UniqueName: \"kubernetes.io/projected/a24de11f-dc03-4dd2-9167-65577983742f-kube-api-access-fqznj\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.887628 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.887674 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.889221 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-config\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.907159 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.907263 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.945247 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqznj\" (UniqueName: \"kubernetes.io/projected/a24de11f-dc03-4dd2-9167-65577983742f-kube-api-access-fqznj\") pod \"dnsmasq-dns-7fd796d7df-wtvrd\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.998949 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0944b587-50ee-4d4f-93e2-b84ad7cdce7b" path="/var/lib/kubelet/pods/0944b587-50ee-4d4f-93e2-b84ad7cdce7b/volumes" Feb 16 11:24:01 crc kubenswrapper[4797]: I0216 11:24:01.999455 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b7ee09-6548-4f55-b125-65be5d58fcba" path="/var/lib/kubelet/pods/15b7ee09-6548-4f55-b125-65be5d58fcba/volumes" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.068602 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.146515 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.155962 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.166826 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.166998 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.167060 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.167526 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-s5mqh" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.167657 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.177603 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.288165 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8e52214d-a751-4e7f-913e-064677d2fe1f","Type":"ContainerStarted","Data":"c3a65aa3ba334adb9ee6948d6fce6f240722fd482b96108dabe574dce3fdba0c"} Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.301878 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.301961 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.301995 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.302089 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx5h5\" (UniqueName: \"kubernetes.io/projected/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-kube-api-access-kx5h5\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.302153 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.328970 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.330444 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.334025 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.334256 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.341655 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.342176 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.403875 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404193 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404323 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr28w\" (UniqueName: \"kubernetes.io/projected/0d56c15d-4b5f-4eac-9a66-760bf878522b-kube-api-access-wr28w\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404444 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 
11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404534 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx5h5\" (UniqueName: \"kubernetes.io/projected/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-kube-api-access-kx5h5\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404680 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404789 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56c15d-4b5f-4eac-9a66-760bf878522b-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404864 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.404944 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.405024 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.405128 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.408484 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.409962 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.415388 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-9xdrm"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.416277 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.429627 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.448702 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx5h5\" (UniqueName: \"kubernetes.io/projected/8f51ac14-22e0-4e95-901e-02cbad7ce1fe-kube-api-access-kx5h5\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-cnfpr\" (UID: \"8f51ac14-22e0-4e95-901e-02cbad7ce1fe\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.483748 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.487216 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.494261 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.503244 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.503478 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.506311 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.506908 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56c15d-4b5f-4eac-9a66-760bf878522b-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.507101 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.507305 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.507660 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr28w\" (UniqueName: \"kubernetes.io/projected/0d56c15d-4b5f-4eac-9a66-760bf878522b-kube-api-access-wr28w\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.507997 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.527651 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.541489 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.542131 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d56c15d-4b5f-4eac-9a66-760bf878522b-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.544199 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.548521 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d56c15d-4b5f-4eac-9a66-760bf878522b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.548595 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.578882 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr28w\" (UniqueName: \"kubernetes.io/projected/0d56c15d-4b5f-4eac-9a66-760bf878522b-kube-api-access-wr28w\") pod \"cloudkitty-lokistack-querier-58c84b5844-vzsll\" (UID: \"0d56c15d-4b5f-4eac-9a66-760bf878522b\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.609857 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgzch\" (UniqueName: \"kubernetes.io/projected/9f1d610c-b137-408a-9cd1-08f01ea36a6a-kube-api-access-fgzch\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.610189 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.610297 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9f1d610c-b137-408a-9cd1-08f01ea36a6a-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.610333 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.610366 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.658095 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.660489 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.670209 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.670432 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.670749 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.670889 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-gtb5b" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.670963 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.670895 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.671062 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.671378 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.672308 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.680810 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.701178 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.714021 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgzch\" (UniqueName: \"kubernetes.io/projected/9f1d610c-b137-408a-9cd1-08f01ea36a6a-kube-api-access-fgzch\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.714096 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.714169 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f1d610c-b137-408a-9cd1-08f01ea36a6a-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.714260 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.714288 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.716880 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.718389 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/9f1d610c-b137-408a-9cd1-08f01ea36a6a-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.722272 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.729985 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/9f1d610c-b137-408a-9cd1-08f01ea36a6a-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.750090 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgzch\" (UniqueName: \"kubernetes.io/projected/9f1d610c-b137-408a-9cd1-08f01ea36a6a-kube-api-access-fgzch\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526\" (UID: \"9f1d610c-b137-408a-9cd1-08f01ea36a6a\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.758396 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-wtvrd"] Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.815531 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.815606 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.815632 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.815657 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: 
\"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.815678 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.815820 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.815880 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816001 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816056 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816097 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl4dh\" (UniqueName: \"kubernetes.io/projected/41e46e5d-912d-4425-baea-f40c0435997b-kube-api-access-gl4dh\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816141 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816171 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816206 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816230 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816256 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q9zz\" (UniqueName: \"kubernetes.io/projected/d934cad8-4584-4bf1-992c-37a3751d682e-kube-api-access-7q9zz\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816275 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816365 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.816420 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.845946 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.874383 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918503 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl4dh\" (UniqueName: \"kubernetes.io/projected/41e46e5d-912d-4425-baea-f40c0435997b-kube-api-access-gl4dh\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918554 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918640 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918676 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918705 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918734 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q9zz\" (UniqueName: \"kubernetes.io/projected/d934cad8-4584-4bf1-992c-37a3751d682e-kube-api-access-7q9zz\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918761 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918788 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 
11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918816 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918862 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918903 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918928 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918956 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.918977 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.919019 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.919107 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 
11:24:02.919184 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.919236 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.920394 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.920539 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.920712 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.921231 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.921612 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.923308 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.923414 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.923670 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.924424 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.925070 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.941070 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.942080 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.942732 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q9zz\" (UniqueName: \"kubernetes.io/projected/d934cad8-4584-4bf1-992c-37a3751d682e-kube-api-access-7q9zz\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.946377 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.948764 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl4dh\" (UniqueName: 
\"kubernetes.io/projected/41e46e5d-912d-4425-baea-f40c0435997b-kube-api-access-gl4dh\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.952307 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.971675 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/41e46e5d-912d-4425-baea-f40c0435997b-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-7kt7j\" (UID: \"41e46e5d-912d-4425-baea-f40c0435997b\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:02 crc kubenswrapper[4797]: I0216 11:24:02.972666 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/d934cad8-4584-4bf1-992c-37a3751d682e-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-s2jlb\" (UID: \"d934cad8-4584-4bf1-992c-37a3751d682e\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.057636 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.086352 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.221340 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr"] Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.300288 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9xdrm" event={"ID":"c89c74dc-5e73-48fb-9885-281d013b1e0f","Type":"ContainerStarted","Data":"949045d81c3c4c3430de2b6e8af3dff669b82c69b96dd6735435828656ea7af5"} Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.304346 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.306119 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.309469 4797 generic.go:334] "Generic (PLEG): container finished" podID="d66691ed-2117-49d8-b5fd-5c5281295b31" containerID="589470b5d975317df81a130c59ad37bebaaebc96059155dd72db41ce609c62a6" exitCode=0 Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.309553 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" event={"ID":"d66691ed-2117-49d8-b5fd-5c5281295b31","Type":"ContainerDied","Data":"589470b5d975317df81a130c59ad37bebaaebc96059155dd72db41ce609c62a6"} Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.309489 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.309886 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.312196 4797 generic.go:334] "Generic (PLEG): container finished" podID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerID="706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e" exitCode=0 Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.313535 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" event={"ID":"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e","Type":"ContainerDied","Data":"706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e"} Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.313603 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.316476 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" event={"ID":"a24de11f-dc03-4dd2-9167-65577983742f","Type":"ContainerStarted","Data":"3df887c4bcfaae892a1c45a89f9f4d11c8630cc307c99a700bcf234b804e93ed"} Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.416748 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.418205 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.420480 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.421303 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.429828 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.429875 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwrrb\" (UniqueName: \"kubernetes.io/projected/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-kube-api-access-dwrrb\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.429899 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.429923 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.429954 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.429976 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.430024 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.430065 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.431892 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.521525 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.522928 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.525312 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.525478 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531497 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531540 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531636 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531656 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531683 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwrrb\" (UniqueName: \"kubernetes.io/projected/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-kube-api-access-dwrrb\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531705 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: 
\"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531729 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531747 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531766 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531792 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531811 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531836 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531862 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531896 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.531917 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssjrq\" (UniqueName: \"kubernetes.io/projected/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-kube-api-access-ssjrq\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.532647 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.533167 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.533260 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.534758 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.537995 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.540726 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.550750 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwrrb\" (UniqueName: \"kubernetes.io/projected/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-kube-api-access-dwrrb\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.553445 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.557978 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.559042 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/4ab6c5d9-8717-4b1b-8d13-6eb03e52a080-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.588531 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633018 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d41ad10-514c-46f6-991f-1d4599322401-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633087 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633159 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633223 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633255 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633419 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633486 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.633536 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.634172 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.636994 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657632 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657689 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657727 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657783 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfn8h\" (UniqueName: \"kubernetes.io/projected/8d41ad10-514c-46f6-991f-1d4599322401-kube-api-access-zfn8h\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657876 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssjrq\" (UniqueName: 
\"kubernetes.io/projected/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-kube-api-access-ssjrq\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657923 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657956 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.657958 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.658439 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.662652 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.663374 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.676381 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.695257 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssjrq\" (UniqueName: \"kubernetes.io/projected/ddc54c65-b3e8-4bb2-a16a-81a2297b5222-kube-api-access-ssjrq\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"ddc54c65-b3e8-4bb2-a16a-81a2297b5222\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.738386 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.759517 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.759652 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.759688 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.759734 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfn8h\" (UniqueName: \"kubernetes.io/projected/8d41ad10-514c-46f6-991f-1d4599322401-kube-api-access-zfn8h\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.759780 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.759842 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d41ad10-514c-46f6-991f-1d4599322401-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.759872 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.760278 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.761660 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.762732 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d41ad10-514c-46f6-991f-1d4599322401-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.764639 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.765111 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.765859 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/8d41ad10-514c-46f6-991f-1d4599322401-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.790547 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.790927 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfn8h\" (UniqueName: \"kubernetes.io/projected/8d41ad10-514c-46f6-991f-1d4599322401-kube-api-access-zfn8h\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"8d41ad10-514c-46f6-991f-1d4599322401\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.793277 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.861104 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-dns-svc\") pod \"d66691ed-2117-49d8-b5fd-5c5281295b31\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.861315 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-config\") pod \"d66691ed-2117-49d8-b5fd-5c5281295b31\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.861492 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbrvm\" (UniqueName: \"kubernetes.io/projected/d66691ed-2117-49d8-b5fd-5c5281295b31-kube-api-access-jbrvm\") pod \"d66691ed-2117-49d8-b5fd-5c5281295b31\" (UID: \"d66691ed-2117-49d8-b5fd-5c5281295b31\") " Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.867366 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66691ed-2117-49d8-b5fd-5c5281295b31-kube-api-access-jbrvm" (OuterVolumeSpecName: "kube-api-access-jbrvm") pod "d66691ed-2117-49d8-b5fd-5c5281295b31" (UID: "d66691ed-2117-49d8-b5fd-5c5281295b31"). InnerVolumeSpecName "kube-api-access-jbrvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.880525 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-config" (OuterVolumeSpecName: "config") pod "d66691ed-2117-49d8-b5fd-5c5281295b31" (UID: "d66691ed-2117-49d8-b5fd-5c5281295b31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.885427 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d66691ed-2117-49d8-b5fd-5c5281295b31" (UID: "d66691ed-2117-49d8-b5fd-5c5281295b31"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.963587 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.963628 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbrvm\" (UniqueName: \"kubernetes.io/projected/d66691ed-2117-49d8-b5fd-5c5281295b31-kube-api-access-jbrvm\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.963645 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d66691ed-2117-49d8-b5fd-5c5281295b31-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:03 crc kubenswrapper[4797]: I0216 11:24:03.977026 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.137045 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j"] Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.149648 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb"] Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.314519 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526"] Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.331438 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" event={"ID":"8f51ac14-22e0-4e95-901e-02cbad7ce1fe","Type":"ContainerStarted","Data":"f0245416ad8389b505ceecdae6542892c1258bbe5a2b22982412afed786492fb"} Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.333144 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" event={"ID":"d934cad8-4584-4bf1-992c-37a3751d682e","Type":"ContainerStarted","Data":"2872efe4f6a7542772890606c7bcca38aa9a4287400088d71feabf69178af34e"} Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.348203 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" event={"ID":"d66691ed-2117-49d8-b5fd-5c5281295b31","Type":"ContainerDied","Data":"76ed211281d31c130011ba6ced151212285d13ce9ceb9f57f06586846ee2fe43"} Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.348266 4797 scope.go:117] "RemoveContainer" containerID="589470b5d975317df81a130c59ad37bebaaebc96059155dd72db41ce609c62a6" Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.348267 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-9qgrb" Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.354385 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" event={"ID":"41e46e5d-912d-4425-baea-f40c0435997b","Type":"ContainerStarted","Data":"cbe0f3c78178ffeccf307f64ad5cf052be5759161e408c475a389cc391c88bae"} Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.397426 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9qgrb"] Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.403700 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-9qgrb"] Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.440035 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll"] Feb 16 11:24:04 crc kubenswrapper[4797]: W0216 11:24:04.456294 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f1d610c_b137_408a_9cd1_08f01ea36a6a.slice/crio-a2dd624ced001b1467a777a2ca745d226ecb5427a778c2b9615c4afe7ecffb3f WatchSource:0}: Error finding container a2dd624ced001b1467a777a2ca745d226ecb5427a778c2b9615c4afe7ecffb3f: Status 404 returned error can't find the container with id a2dd624ced001b1467a777a2ca745d226ecb5427a778c2b9615c4afe7ecffb3f Feb 16 11:24:04 crc kubenswrapper[4797]: I0216 11:24:04.968198 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 11:24:05 crc kubenswrapper[4797]: I0216 11:24:05.033613 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 11:24:05 crc kubenswrapper[4797]: I0216 11:24:05.339603 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 11:24:05 crc kubenswrapper[4797]: I0216 11:24:05.376676 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" event={"ID":"0d56c15d-4b5f-4eac-9a66-760bf878522b","Type":"ContainerStarted","Data":"cd2c375e34eb0290e127ce90868469b8dfe38bcd96f442619211bb1c5d451201"} Feb 16 11:24:05 crc kubenswrapper[4797]: I0216 11:24:05.380781 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" event={"ID":"9f1d610c-b137-408a-9cd1-08f01ea36a6a","Type":"ContainerStarted","Data":"a2dd624ced001b1467a777a2ca745d226ecb5427a778c2b9615c4afe7ecffb3f"} Feb 16 11:24:06 crc kubenswrapper[4797]: I0216 11:24:06.021074 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66691ed-2117-49d8-b5fd-5c5281295b31" path="/var/lib/kubelet/pods/d66691ed-2117-49d8-b5fd-5c5281295b31/volumes" Feb 16 11:24:06 crc kubenswrapper[4797]: I0216 11:24:06.406824 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080","Type":"ContainerStarted","Data":"c7acfc1716bf146aabb23309ba11d421b1e1e4b35e2b20969756b37ac0b92759"} Feb 16 11:24:06 crc kubenswrapper[4797]: I0216 11:24:06.408600 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"ddc54c65-b3e8-4bb2-a16a-81a2297b5222","Type":"ContainerStarted","Data":"8cdabb3af30649061ec98992524d54b2de1a8168eb5b96c3457e121f7b6a2aba"} Feb 16 11:24:06 crc 
kubenswrapper[4797]: W0216 11:24:06.445398 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d41ad10_514c_46f6_991f_1d4599322401.slice/crio-020551fbbed0a381d2139e11572bfa2116aaa37cca55165ff3b1062371c6649d WatchSource:0}: Error finding container 020551fbbed0a381d2139e11572bfa2116aaa37cca55165ff3b1062371c6649d: Status 404 returned error can't find the container with id 020551fbbed0a381d2139e11572bfa2116aaa37cca55165ff3b1062371c6649d Feb 16 11:24:07 crc kubenswrapper[4797]: I0216 11:24:07.415287 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"8d41ad10-514c-46f6-991f-1d4599322401","Type":"ContainerStarted","Data":"020551fbbed0a381d2139e11572bfa2116aaa37cca55165ff3b1062371c6649d"} Feb 16 11:24:14 crc kubenswrapper[4797]: E0216 11:24:14.891707 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 16 11:24:14 crc kubenswrapper[4797]: E0216 11:24:14.892210 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/alertmanager/config/alertmanager.yaml.gz --config-envsubst-file=/etc/alertmanager/config_out/alertmanager.env.yaml --watched-dir=/etc/alertmanager/config],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:-1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wwmsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-metric-storage-0_openstack(ad8679cc-1167-4feb-a53a-49bded099628): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 11:24:14 crc kubenswrapper[4797]: E0216 11:24:14.893615 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/alertmanager-metric-storage-0" podUID="ad8679cc-1167-4feb-a53a-49bded099628" Feb 16 11:24:14 crc kubenswrapper[4797]: E0216 11:24:14.993818 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/alertmanager-metric-storage-0" podUID="ad8679cc-1167-4feb-a53a-49bded099628" Feb 16 11:24:15 crc kubenswrapper[4797]: E0216 11:24:15.053198 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 16 11:24:15 crc kubenswrapper[4797]: E0216 11:24:15.053392 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml 
--watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdp9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(113930a6-db19-4e43-bd2b-75ef1d11c021): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 11:24:15 crc kubenswrapper[4797]: E0216 11:24:15.054822 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" Feb 16 11:24:15 crc kubenswrapper[4797]: E0216 11:24:15.584233 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Feb 16 11:24:15 crc kubenswrapper[4797]: E0216 
11:24:15.584561 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n547hf4h659h547h64dh5c6h77h68dh5d6h689h564h94h67fh5dfh564h99h684h68h644hd7h54bhcdh695h547h5fchcfh57h5b9h5cbhcbh57hd6q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdgps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(517059fd-92d8-4058-b426-5653912b7a41): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:24:15 crc kubenswrapper[4797]: E0216 11:24:15.585902 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="517059fd-92d8-4058-b426-5653912b7a41" Feb 16 11:24:16 crc kubenswrapper[4797]: E0216 11:24:16.004569 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="517059fd-92d8-4058-b426-5653912b7a41" Feb 16 11:24:16 crc kubenswrapper[4797]: E0216 11:24:16.005054 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" Feb 16 11:24:16 crc kubenswrapper[4797]: E0216 11:24:16.928925 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981" Feb 16 11:24:16 crc kubenswrapper[4797]: E0216 11:24:16.929211 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-distributor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=distributor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml -config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-distributor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kx5h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-distributor-585d9bcbc-cnfpr_openstack(8f51ac14-22e0-4e95-901e-02cbad7ce1fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 11:24:16 crc kubenswrapper[4797]: E0216 11:24:16.930460 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" podUID="8f51ac14-22e0-4e95-901e-02cbad7ce1fe" Feb 16 11:24:17 crc kubenswrapper[4797]: E0216 11:24:17.012012 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-distributor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" podUID="8f51ac14-22e0-4e95-901e-02cbad7ce1fe" Feb 16 11:24:17 crc kubenswrapper[4797]: E0216 11:24:17.091546 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified" Feb 16 11:24:17 crc kubenswrapper[4797]: E0216 11:24:17.091830 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n95h5bbh59bh595h596h5cfh57dhdh5bh545h5b6h55fh9ch648h584hc7h695hfbh9ch5ddh588h74h7bh5b9h564h5d8h599hd7h54ch4hc4h5bcq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxmcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(a782b16e-c29b-4d0c-ae20-23e2822d8e02): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.223899 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.224954 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ljnj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(4acd6dc5-d9e3-4a05-aed4-ecc80733f365): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.226398 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="4acd6dc5-d9e3-4a05-aed4-ecc80733f365" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.234454 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.234658 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="init 
container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tvnqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(08b607dd-023c-4050-87d5-58f8f7f1714a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.236641 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="08b607dd-023c-4050-87d5-58f8f7f1714a" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.941166 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.941370 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55bh55ch6fh6hbfh689hc6h58h66fh5f4h5b9h5bbh67h5f6h5d6h59fh658h54ch5f5hf8h5bbh86h5bdh9chfch544h567h54dh589h5ddh687h68fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f7c9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-dht7z_openstack(3114c460-eb74-48a9-bf0c-d32fe63a71be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:24:19 crc kubenswrapper[4797]: E0216 11:24:19.942650 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-dht7z" podUID="3114c460-eb74-48a9-bf0c-d32fe63a71be" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.035674 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="08b607dd-023c-4050-87d5-58f8f7f1714a" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.035682 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-dht7z" podUID="3114c460-eb74-48a9-bf0c-d32fe63a71be" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.035757 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="4acd6dc5-d9e3-4a05-aed4-ecc80733f365" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.387160 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.387217 4797 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.387355 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qn8f5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(3bf0ec48-8b5b-4671-b213-f04c4e66ad9e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.388875 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.565436 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified" Feb 16 11:24:20 crc kubenswrapper[4797]: E0216 11:24:20.565633 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n669hb4h76h558hb8h585h546h65h685h65bh696h54ch57fh56fh8ch5bh55dhfh98h8fh66hdch58dh549h85h55dh664h9fh696h658hb5h555q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx8f7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(8e52214d-a751-4e7f-913e-064677d2fe1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:24:21 crc kubenswrapper[4797]: E0216 11:24:21.043821 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" Feb 16 11:24:21 crc kubenswrapper[4797]: E0216 11:24:21.624403 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="8e52214d-a751-4e7f-913e-064677d2fe1f" Feb 16 11:24:22 crc kubenswrapper[4797]: E0216 11:24:22.023898 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="a782b16e-c29b-4d0c-ae20-23e2822d8e02" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.057916 4797 generic.go:334] "Generic (PLEG): container finished" podID="a24de11f-dc03-4dd2-9167-65577983742f" containerID="e0360f5d82c08f2ff6b33bb8edd9753846606e6951e8ed47e4d59b199b73510e" exitCode=0 Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.057988 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" event={"ID":"a24de11f-dc03-4dd2-9167-65577983742f","Type":"ContainerDied","Data":"e0360f5d82c08f2ff6b33bb8edd9753846606e6951e8ed47e4d59b199b73510e"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.060957 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" event={"ID":"9f1d610c-b137-408a-9cd1-08f01ea36a6a","Type":"ContainerStarted","Data":"81942fa4a599f1f326d38fb3a94dc51d4cbc58229fb3edb08649ccb89b8b1b04"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.061618 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.066259 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zgw2f" event={"ID":"d4cd0f86-ee13-4721-b2fe-091b428a14bd","Type":"ContainerStarted","Data":"8c3b626a961470406dd19837784861465949183b05ba7520c7f3de634be50385"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.070112 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"4ab6c5d9-8717-4b1b-8d13-6eb03e52a080","Type":"ContainerStarted","Data":"c0b0a0dfea270c749e1d7c050e56a6ae069129b9770679c67effd2bf84d4c320"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.070840 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.072146 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-9xdrm" event={"ID":"c89c74dc-5e73-48fb-9885-281d013b1e0f","Type":"ContainerStarted","Data":"2a803faa5d9d6ae99b86897eea34c0d9e0c078a33eac90db146366858371e540"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.078360 4797 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8e52214d-a751-4e7f-913e-064677d2fe1f","Type":"ContainerStarted","Data":"0ec3ca8405c3d8b4a6ae7e29c31f10995d33eb290a78b0ae7713e6fb9f405b83"} Feb 16 11:24:22 crc kubenswrapper[4797]: E0216 11:24:22.082032 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="8e52214d-a751-4e7f-913e-064677d2fe1f" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.091099 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"ddc54c65-b3e8-4bb2-a16a-81a2297b5222","Type":"ContainerStarted","Data":"3bb0ffd332755bfc3f4e52a4ae15f4d44b791e5111f4f4e7011370cad2da3e06"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.091438 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.096247 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" podStartSLOduration=3.925284844 podStartE2EDuration="20.096232385s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:04.458939605 +0000 UTC m=+1039.179124585" lastFinishedPulling="2026-02-16 11:24:20.629887116 +0000 UTC m=+1055.350072126" observedRunningTime="2026-02-16 11:24:22.09494326 +0000 UTC m=+1056.815128240" watchObservedRunningTime="2026-02-16 11:24:22.096232385 +0000 UTC m=+1056.816417365" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.117419 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" event={"ID":"41e46e5d-912d-4425-baea-f40c0435997b","Type":"ContainerStarted","Data":"735e8c65f48aec1f0abe17e7accbf0385870c0e0b12c8e240680b3a74fcf3d28"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.120192 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.133249 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" event={"ID":"0d56c15d-4b5f-4eac-9a66-760bf878522b","Type":"ContainerStarted","Data":"ed8ac36f2050aeca6d81b7e8baa130e1e0ab9190d607bf4291a2d1d91ff25ecd"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.136969 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=5.151916852 podStartE2EDuration="20.136948096s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:05.645711395 +0000 UTC m=+1040.365896375" lastFinishedPulling="2026-02-16 11:24:20.630742639 +0000 UTC m=+1055.350927619" observedRunningTime="2026-02-16 11:24:22.121168956 +0000 UTC m=+1056.841353936" watchObservedRunningTime="2026-02-16 11:24:22.136948096 +0000 UTC m=+1056.857133096" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.149721 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.151679 4797 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.152122 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" event={"ID":"d934cad8-4584-4bf1-992c-37a3751d682e","Type":"ContainerStarted","Data":"8bcb56f02ee5a0e78a54a674cf3d41d5d6470b21d594c8ae3eae3f8ee9ba720d"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.153054 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.157646 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a782b16e-c29b-4d0c-ae20-23e2822d8e02","Type":"ContainerStarted","Data":"b43eeda52a0335ddfc5e6e87f18e7309be2bda2bf0467e771ef2f98b2cf0ab47"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.160450 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"8d41ad10-514c-46f6-991f-1d4599322401","Type":"ContainerStarted","Data":"3d04f5e6f11c9617edcda678a748776eaed51bdfb76524d2113b33d0186a230c"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.161125 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 11:24:22 crc kubenswrapper[4797]: E0216 11:24:22.162900 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="a782b16e-c29b-4d0c-ae20-23e2822d8e02" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.164719 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" event={"ID":"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e","Type":"ContainerStarted","Data":"be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076"} Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.165030 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.174717 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.233921 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-9xdrm" podStartSLOduration=4.3447281 podStartE2EDuration="21.233897592s" podCreationTimestamp="2026-02-16 11:24:01 +0000 UTC" firstStartedPulling="2026-02-16 11:24:02.431265155 +0000 UTC m=+1037.151450135" lastFinishedPulling="2026-02-16 11:24:19.320434647 +0000 UTC m=+1054.040619627" observedRunningTime="2026-02-16 11:24:22.190001744 +0000 UTC m=+1056.910186724" watchObservedRunningTime="2026-02-16 11:24:22.233897592 +0000 UTC m=+1056.954082572" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.255812 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-s2jlb" podStartSLOduration=3.789711835 podStartE2EDuration="20.25579056s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:04.177818873 +0000 UTC m=+1038.898003853" 
lastFinishedPulling="2026-02-16 11:24:20.643897598 +0000 UTC m=+1055.364082578" observedRunningTime="2026-02-16 11:24:22.210331179 +0000 UTC m=+1056.930516159" watchObservedRunningTime="2026-02-16 11:24:22.25579056 +0000 UTC m=+1056.975975540" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.271173 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=6.120178867 podStartE2EDuration="20.271156219s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:06.453917162 +0000 UTC m=+1041.174102142" lastFinishedPulling="2026-02-16 11:24:20.604894514 +0000 UTC m=+1055.325079494" observedRunningTime="2026-02-16 11:24:22.246400834 +0000 UTC m=+1056.966585824" watchObservedRunningTime="2026-02-16 11:24:22.271156219 +0000 UTC m=+1056.991341199" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.348143 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-7kt7j" podStartSLOduration=3.9167087609999998 podStartE2EDuration="20.34811821s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:04.173497575 +0000 UTC m=+1038.893682555" lastFinishedPulling="2026-02-16 11:24:20.604906984 +0000 UTC m=+1055.325092004" observedRunningTime="2026-02-16 11:24:22.34189654 +0000 UTC m=+1057.062081510" watchObservedRunningTime="2026-02-16 11:24:22.34811821 +0000 UTC m=+1057.068303190" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.377040 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=5.33395754 podStartE2EDuration="20.377021858s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:05.650124795 +0000 UTC m=+1040.370309775" lastFinishedPulling="2026-02-16 11:24:20.693189113 +0000 UTC m=+1055.413374093" observedRunningTime="2026-02-16 11:24:22.365054702 +0000 UTC m=+1057.085239692" watchObservedRunningTime="2026-02-16 11:24:22.377021858 +0000 UTC m=+1057.097206838" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.389820 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" podStartSLOduration=35.392515507 podStartE2EDuration="37.389801807s" podCreationTimestamp="2026-02-16 11:23:45 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.393395308 +0000 UTC m=+1035.113580288" lastFinishedPulling="2026-02-16 11:24:02.390681608 +0000 UTC m=+1037.110866588" observedRunningTime="2026-02-16 11:24:22.382126048 +0000 UTC m=+1057.102311028" watchObservedRunningTime="2026-02-16 11:24:22.389801807 +0000 UTC m=+1057.109986787" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.408377 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" podStartSLOduration=4.460490451 podStartE2EDuration="20.408358143s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:04.657059132 +0000 UTC m=+1039.377244112" lastFinishedPulling="2026-02-16 11:24:20.604926794 +0000 UTC m=+1055.325111804" observedRunningTime="2026-02-16 11:24:22.405599718 +0000 UTC m=+1057.125784708" watchObservedRunningTime="2026-02-16 11:24:22.408358143 +0000 UTC m=+1057.128543123" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.445627 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-jqc8n"] Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.490412 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fwg2v"] Feb 16 11:24:22 crc kubenswrapper[4797]: E0216 11:24:22.490855 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66691ed-2117-49d8-b5fd-5c5281295b31" containerName="init" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.490868 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66691ed-2117-49d8-b5fd-5c5281295b31" containerName="init" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.491044 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66691ed-2117-49d8-b5fd-5c5281295b31" containerName="init" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.492132 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.502168 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.518687 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fwg2v"] Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.567257 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4nxc\" (UniqueName: \"kubernetes.io/projected/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-kube-api-access-h4nxc\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.567366 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-config\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.567780 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.567853 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.567905 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.669803 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.669858 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.669878 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.669921 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4nxc\" (UniqueName: \"kubernetes.io/projected/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-kube-api-access-h4nxc\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.669963 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-config\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.670782 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.670821 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.670831 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-config\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.671374 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.705808 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4nxc\" (UniqueName: \"kubernetes.io/projected/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-kube-api-access-h4nxc\") pod \"dnsmasq-dns-86db49b7ff-fwg2v\" (UID: 
\"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:22 crc kubenswrapper[4797]: I0216 11:24:22.817586 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.173212 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" event={"ID":"a24de11f-dc03-4dd2-9167-65577983742f","Type":"ContainerStarted","Data":"8642067c5aec6cefa1269421c715d2e29ff63e54bb392ee77aa9960cc80e228d"} Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.174694 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.189839 4797 generic.go:334] "Generic (PLEG): container finished" podID="d4cd0f86-ee13-4721-b2fe-091b428a14bd" containerID="8c3b626a961470406dd19837784861465949183b05ba7520c7f3de634be50385" exitCode=0 Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.189896 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zgw2f" event={"ID":"d4cd0f86-ee13-4721-b2fe-091b428a14bd","Type":"ContainerDied","Data":"8c3b626a961470406dd19837784861465949183b05ba7520c7f3de634be50385"} Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.194255 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"40b82cbf-8ce3-45e9-a87e-a96cbe83488c","Type":"ContainerStarted","Data":"b30f5794a1569a833b29a4e2003c86c7568e0f5b3acb482d44119bb5e723c6ae"} Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.198352 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aa87d44-dc52-4398-a8f5-0adf7d33966e","Type":"ContainerStarted","Data":"f9f043d6115c5196475dc2af25329d9344140e068d8d356839f140959f075ccc"} Feb 16 11:24:23 crc kubenswrapper[4797]: E0216 11:24:23.200213 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="a782b16e-c29b-4d0c-ae20-23e2822d8e02" Feb 16 11:24:23 crc kubenswrapper[4797]: E0216 11:24:23.201408 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="8e52214d-a751-4e7f-913e-064677d2fe1f" Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.215391 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" podStartSLOduration=22.215377388 podStartE2EDuration="22.215377388s" podCreationTimestamp="2026-02-16 11:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:23.212916482 +0000 UTC m=+1057.933101462" watchObservedRunningTime="2026-02-16 11:24:23.215377388 +0000 UTC m=+1057.935562368" Feb 16 11:24:23 crc kubenswrapper[4797]: I0216 11:24:23.306876 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fwg2v"] Feb 16 11:24:23 crc kubenswrapper[4797]: W0216 11:24:23.317535 4797 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d02a6e0_fd01_49a4_80c1_3aa581fd0f58.slice/crio-bd3f6f1d58fb6e5d09e414c6af58cf028ad79ae299bb754a4bd17e9a0f9d7a2c WatchSource:0}: Error finding container bd3f6f1d58fb6e5d09e414c6af58cf028ad79ae299bb754a4bd17e9a0f9d7a2c: Status 404 returned error can't find the container with id bd3f6f1d58fb6e5d09e414c6af58cf028ad79ae299bb754a4bd17e9a0f9d7a2c Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.208329 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zgw2f" event={"ID":"d4cd0f86-ee13-4721-b2fe-091b428a14bd","Type":"ContainerStarted","Data":"9f142822382d3bf82f1fd80ccc98282694e1eb28ae31e52d2e6d5213e2a81b83"} Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.208640 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-zgw2f" event={"ID":"d4cd0f86-ee13-4721-b2fe-091b428a14bd","Type":"ContainerStarted","Data":"3f8288958dc42ea8f2bf66dc16b6c40bfcb3a3672015f4ca97d332ef771c089d"} Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.208673 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.208688 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-zgw2f" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.210120 4797 generic.go:334] "Generic (PLEG): container finished" podID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerID="963826670d47dcea583e19b2e8854f4d5613bf7b3abc1d12fb522cd7d76df6d7" exitCode=0 Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.210388 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" event={"ID":"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58","Type":"ContainerDied","Data":"963826670d47dcea583e19b2e8854f4d5613bf7b3abc1d12fb522cd7d76df6d7"} Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.210452 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" event={"ID":"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58","Type":"ContainerStarted","Data":"bd3f6f1d58fb6e5d09e414c6af58cf028ad79ae299bb754a4bd17e9a0f9d7a2c"} Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.211657 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" podUID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerName="dnsmasq-dns" containerID="cri-o://be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076" gracePeriod=10 Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.240694 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-zgw2f" podStartSLOduration=10.021753685 podStartE2EDuration="29.240666661s" podCreationTimestamp="2026-02-16 11:23:55 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.545400126 +0000 UTC m=+1035.265585106" lastFinishedPulling="2026-02-16 11:24:19.764313102 +0000 UTC m=+1054.484498082" observedRunningTime="2026-02-16 11:24:24.228957731 +0000 UTC m=+1058.949142751" watchObservedRunningTime="2026-02-16 11:24:24.240666661 +0000 UTC m=+1058.960851691" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.619729 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.808481 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-dns-svc\") pod \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.809928 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-config\") pod \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.810100 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfpgv\" (UniqueName: \"kubernetes.io/projected/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-kube-api-access-rfpgv\") pod \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\" (UID: \"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e\") " Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.815529 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-kube-api-access-rfpgv" (OuterVolumeSpecName: "kube-api-access-rfpgv") pod "06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" (UID: "06616d46-a0f2-4bd4-ae40-00c67b9bfb0e"). InnerVolumeSpecName "kube-api-access-rfpgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.844765 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" (UID: "06616d46-a0f2-4bd4-ae40-00c67b9bfb0e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.852262 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-config" (OuterVolumeSpecName: "config") pod "06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" (UID: "06616d46-a0f2-4bd4-ae40-00c67b9bfb0e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.912856 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfpgv\" (UniqueName: \"kubernetes.io/projected/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-kube-api-access-rfpgv\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.912899 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:24 crc kubenswrapper[4797]: I0216 11:24:24.912913 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.221002 4797 generic.go:334] "Generic (PLEG): container finished" podID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerID="be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076" exitCode=0 Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.221084 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.221084 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" event={"ID":"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e","Type":"ContainerDied","Data":"be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076"} Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.222246 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jqc8n" event={"ID":"06616d46-a0f2-4bd4-ae40-00c67b9bfb0e","Type":"ContainerDied","Data":"0b35a5d4877a5371fbee2adfe58dc8ec2e1c011c058d6a7f0d029fc2a6314ec0"} Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.222287 4797 scope.go:117] "RemoveContainer" containerID="be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.224555 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" event={"ID":"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58","Type":"ContainerStarted","Data":"7676882ea0fb00cc85f3b146e41b5b14964e2991f4a77e04c9e02f2824451d87"} Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.224765 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.251852 4797 scope.go:117] "RemoveContainer" containerID="706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.254436 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" podStartSLOduration=3.254412158 podStartE2EDuration="3.254412158s" podCreationTimestamp="2026-02-16 11:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:25.252217909 +0000 UTC m=+1059.972402889" watchObservedRunningTime="2026-02-16 11:24:25.254412158 +0000 UTC m=+1059.974597138" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.272041 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jqc8n"] Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 
11:24:25.280185 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jqc8n"] Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.289689 4797 scope.go:117] "RemoveContainer" containerID="be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076" Feb 16 11:24:25 crc kubenswrapper[4797]: E0216 11:24:25.290071 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076\": container with ID starting with be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076 not found: ID does not exist" containerID="be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.290104 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076"} err="failed to get container status \"be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076\": rpc error: code = NotFound desc = could not find container \"be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076\": container with ID starting with be9dc9e0103bfdd5dbbb02b4c60ffddb7a5f40c39755da3777a3c6770f842076 not found: ID does not exist" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.290129 4797 scope.go:117] "RemoveContainer" containerID="706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e" Feb 16 11:24:25 crc kubenswrapper[4797]: E0216 11:24:25.290351 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e\": container with ID starting with 706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e not found: ID does not exist" containerID="706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.290380 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e"} err="failed to get container status \"706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e\": rpc error: code = NotFound desc = could not find container \"706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e\": container with ID starting with 706ace2b3da3c64a059ba35bf55c1a6796b13a0571d4ba22b3cf2b24540e349e not found: ID does not exist" Feb 16 11:24:25 crc kubenswrapper[4797]: I0216 11:24:25.995139 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" path="/var/lib/kubelet/pods/06616d46-a0f2-4bd4-ae40-00c67b9bfb0e/volumes" Feb 16 11:24:27 crc kubenswrapper[4797]: I0216 11:24:27.069850 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:29 crc kubenswrapper[4797]: I0216 11:24:29.260868 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"517059fd-92d8-4058-b426-5653912b7a41","Type":"ContainerStarted","Data":"5f4bc183093dfd6718945ea1726fa6506517551917e78c32d1da643fe5158c5b"} Feb 16 11:24:29 crc kubenswrapper[4797]: I0216 11:24:29.261611 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 11:24:29 crc kubenswrapper[4797]: I0216 11:24:29.297093 4797 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.242178084 podStartE2EDuration="40.297064281s" podCreationTimestamp="2026-02-16 11:23:49 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.415959534 +0000 UTC m=+1035.136144514" lastFinishedPulling="2026-02-16 11:24:28.470845731 +0000 UTC m=+1063.191030711" observedRunningTime="2026-02-16 11:24:29.286076341 +0000 UTC m=+1064.006261321" watchObservedRunningTime="2026-02-16 11:24:29.297064281 +0000 UTC m=+1064.017249291" Feb 16 11:24:31 crc kubenswrapper[4797]: I0216 11:24:31.276476 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" event={"ID":"8f51ac14-22e0-4e95-901e-02cbad7ce1fe","Type":"ContainerStarted","Data":"b45d7b8ef3d6a6ef1d94fd1aae205e4dafe48662047bc084a822d08bd50f6c9b"} Feb 16 11:24:31 crc kubenswrapper[4797]: I0216 11:24:31.277148 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:31 crc kubenswrapper[4797]: I0216 11:24:31.278493 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ad8679cc-1167-4feb-a53a-49bded099628","Type":"ContainerStarted","Data":"18354a4837725650f06159189e32d46682ae19effb72fbae70edeff08fe98d2f"} Feb 16 11:24:31 crc kubenswrapper[4797]: I0216 11:24:31.301138 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" podStartSLOduration=-9223372007.553661 podStartE2EDuration="29.301114606s" podCreationTimestamp="2026-02-16 11:24:02 +0000 UTC" firstStartedPulling="2026-02-16 11:24:03.514604322 +0000 UTC m=+1038.234789302" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:31.295594376 +0000 UTC m=+1066.015779356" watchObservedRunningTime="2026-02-16 11:24:31.301114606 +0000 UTC m=+1066.021299626" Feb 16 11:24:32 crc kubenswrapper[4797]: I0216 11:24:32.287612 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerStarted","Data":"10c3892c9f010c9fb931f8a6dd0937caf2bc16fbe0e8a41de98eccbd07627fe5"} Feb 16 11:24:32 crc kubenswrapper[4797]: I0216 11:24:32.820739 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:32 crc kubenswrapper[4797]: I0216 11:24:32.901849 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-wtvrd"] Feb 16 11:24:32 crc kubenswrapper[4797]: I0216 11:24:32.902089 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" podUID="a24de11f-dc03-4dd2-9167-65577983742f" containerName="dnsmasq-dns" containerID="cri-o://8642067c5aec6cefa1269421c715d2e29ff63e54bb392ee77aa9960cc80e228d" gracePeriod=10 Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.298559 4797 generic.go:334] "Generic (PLEG): container finished" podID="a24de11f-dc03-4dd2-9167-65577983742f" containerID="8642067c5aec6cefa1269421c715d2e29ff63e54bb392ee77aa9960cc80e228d" exitCode=0 Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.298617 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" 
event={"ID":"a24de11f-dc03-4dd2-9167-65577983742f","Type":"ContainerDied","Data":"8642067c5aec6cefa1269421c715d2e29ff63e54bb392ee77aa9960cc80e228d"} Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.300165 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dht7z" event={"ID":"3114c460-eb74-48a9-bf0c-d32fe63a71be","Type":"ContainerStarted","Data":"8bfb4018082abe35a604cdeb82906b403419344206bebc5680a786ebacba7eef"} Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.300374 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-dht7z" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.321565 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dht7z" podStartSLOduration=6.642520289 podStartE2EDuration="38.321542878s" podCreationTimestamp="2026-02-16 11:23:55 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.784727478 +0000 UTC m=+1035.504912448" lastFinishedPulling="2026-02-16 11:24:32.463750057 +0000 UTC m=+1067.183935037" observedRunningTime="2026-02-16 11:24:33.31649627 +0000 UTC m=+1068.036681250" watchObservedRunningTime="2026-02-16 11:24:33.321542878 +0000 UTC m=+1068.041727858" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.333749 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.390193 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-ovsdbserver-nb\") pod \"a24de11f-dc03-4dd2-9167-65577983742f\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.390265 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-config\") pod \"a24de11f-dc03-4dd2-9167-65577983742f\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.390293 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-dns-svc\") pod \"a24de11f-dc03-4dd2-9167-65577983742f\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.390382 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqznj\" (UniqueName: \"kubernetes.io/projected/a24de11f-dc03-4dd2-9167-65577983742f-kube-api-access-fqznj\") pod \"a24de11f-dc03-4dd2-9167-65577983742f\" (UID: \"a24de11f-dc03-4dd2-9167-65577983742f\") " Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.396731 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a24de11f-dc03-4dd2-9167-65577983742f-kube-api-access-fqznj" (OuterVolumeSpecName: "kube-api-access-fqznj") pod "a24de11f-dc03-4dd2-9167-65577983742f" (UID: "a24de11f-dc03-4dd2-9167-65577983742f"). InnerVolumeSpecName "kube-api-access-fqznj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.435760 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a24de11f-dc03-4dd2-9167-65577983742f" (UID: "a24de11f-dc03-4dd2-9167-65577983742f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.436200 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a24de11f-dc03-4dd2-9167-65577983742f" (UID: "a24de11f-dc03-4dd2-9167-65577983742f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.447144 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-config" (OuterVolumeSpecName: "config") pod "a24de11f-dc03-4dd2-9167-65577983742f" (UID: "a24de11f-dc03-4dd2-9167-65577983742f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.492915 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.492950 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.492964 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqznj\" (UniqueName: \"kubernetes.io/projected/a24de11f-dc03-4dd2-9167-65577983742f-kube-api-access-fqznj\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:33 crc kubenswrapper[4797]: I0216 11:24:33.492981 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a24de11f-dc03-4dd2-9167-65577983742f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:34 crc kubenswrapper[4797]: I0216 11:24:34.311635 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" Feb 16 11:24:34 crc kubenswrapper[4797]: I0216 11:24:34.311654 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-wtvrd" event={"ID":"a24de11f-dc03-4dd2-9167-65577983742f","Type":"ContainerDied","Data":"3df887c4bcfaae892a1c45a89f9f4d11c8630cc307c99a700bcf234b804e93ed"} Feb 16 11:24:34 crc kubenswrapper[4797]: I0216 11:24:34.311737 4797 scope.go:117] "RemoveContainer" containerID="8642067c5aec6cefa1269421c715d2e29ff63e54bb392ee77aa9960cc80e228d" Feb 16 11:24:34 crc kubenswrapper[4797]: I0216 11:24:34.335385 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-wtvrd"] Feb 16 11:24:34 crc kubenswrapper[4797]: I0216 11:24:34.339470 4797 scope.go:117] "RemoveContainer" containerID="e0360f5d82c08f2ff6b33bb8edd9753846606e6951e8ed47e4d59b199b73510e" Feb 16 11:24:34 crc kubenswrapper[4797]: I0216 11:24:34.344183 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-wtvrd"] Feb 16 11:24:34 crc kubenswrapper[4797]: I0216 11:24:34.866829 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 16 11:24:35 crc kubenswrapper[4797]: I0216 11:24:35.319680 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"08b607dd-023c-4050-87d5-58f8f7f1714a","Type":"ContainerStarted","Data":"9e27169a1aedd0697d0ec7b4b334f34ee3a490393494afe3ea7b3c24b4c10f92"} Feb 16 11:24:35 crc kubenswrapper[4797]: I0216 11:24:35.321668 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4acd6dc5-d9e3-4a05-aed4-ecc80733f365","Type":"ContainerStarted","Data":"a5b6674f3113374806c9551adfa4baa67a84a0220e3349e66e8b613d93369bb3"} Feb 16 11:24:35 crc kubenswrapper[4797]: I0216 11:24:35.998260 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a24de11f-dc03-4dd2-9167-65577983742f" path="/var/lib/kubelet/pods/a24de11f-dc03-4dd2-9167-65577983742f/volumes" Feb 16 11:24:36 crc kubenswrapper[4797]: I0216 11:24:36.333665 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e","Type":"ContainerStarted","Data":"5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0"} Feb 16 11:24:36 crc kubenswrapper[4797]: I0216 11:24:36.333999 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 11:24:36 crc kubenswrapper[4797]: I0216 11:24:36.348833 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.748481422 podStartE2EDuration="45.348813529s" podCreationTimestamp="2026-02-16 11:23:51 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.789483578 +0000 UTC m=+1035.509668558" lastFinishedPulling="2026-02-16 11:24:35.389815675 +0000 UTC m=+1070.110000665" observedRunningTime="2026-02-16 11:24:36.346516886 +0000 UTC m=+1071.066701866" watchObservedRunningTime="2026-02-16 11:24:36.348813529 +0000 UTC m=+1071.068998509" Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.351957 4797 generic.go:334] "Generic (PLEG): container finished" podID="ad8679cc-1167-4feb-a53a-49bded099628" containerID="18354a4837725650f06159189e32d46682ae19effb72fbae70edeff08fe98d2f" exitCode=0 Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.352033 4797 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ad8679cc-1167-4feb-a53a-49bded099628","Type":"ContainerDied","Data":"18354a4837725650f06159189e32d46682ae19effb72fbae70edeff08fe98d2f"} Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.354631 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a782b16e-c29b-4d0c-ae20-23e2822d8e02","Type":"ContainerStarted","Data":"b07a52d6037076e68ef80f3ba293c644f463726fbed6cc6f7f1975616b3bef07"} Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.357255 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"8e52214d-a751-4e7f-913e-064677d2fe1f","Type":"ContainerStarted","Data":"f50727fb6dc5e2e3a1558953c58c744e496bb3cdb01f9f6f5c037c1e91458e0d"} Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.359251 4797 generic.go:334] "Generic (PLEG): container finished" podID="4acd6dc5-d9e3-4a05-aed4-ecc80733f365" containerID="a5b6674f3113374806c9551adfa4baa67a84a0220e3349e66e8b613d93369bb3" exitCode=0 Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.359287 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4acd6dc5-d9e3-4a05-aed4-ecc80733f365","Type":"ContainerDied","Data":"a5b6674f3113374806c9551adfa4baa67a84a0220e3349e66e8b613d93369bb3"} Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.443606 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.859933323 podStartE2EDuration="40.443565129s" podCreationTimestamp="2026-02-16 11:23:58 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.85074691 +0000 UTC m=+1035.570931890" lastFinishedPulling="2026-02-16 11:24:37.434378686 +0000 UTC m=+1072.154563696" observedRunningTime="2026-02-16 11:24:38.440489935 +0000 UTC m=+1073.160674925" watchObservedRunningTime="2026-02-16 11:24:38.443565129 +0000 UTC m=+1073.163750109" Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.473159 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=6.859349888 podStartE2EDuration="42.473137946s" podCreationTimestamp="2026-02-16 11:23:56 +0000 UTC" firstStartedPulling="2026-02-16 11:24:01.819941271 +0000 UTC m=+1036.540126251" lastFinishedPulling="2026-02-16 11:24:37.433729319 +0000 UTC m=+1072.153914309" observedRunningTime="2026-02-16 11:24:38.465635261 +0000 UTC m=+1073.185820281" watchObservedRunningTime="2026-02-16 11:24:38.473137946 +0000 UTC m=+1073.193322926" Feb 16 11:24:38 crc kubenswrapper[4797]: I0216 11:24:38.652742 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.377038 4797 generic.go:334] "Generic (PLEG): container finished" podID="08b607dd-023c-4050-87d5-58f8f7f1714a" containerID="9e27169a1aedd0697d0ec7b4b334f34ee3a490393494afe3ea7b3c24b4c10f92" exitCode=0 Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.377123 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"08b607dd-023c-4050-87d5-58f8f7f1714a","Type":"ContainerDied","Data":"9e27169a1aedd0697d0ec7b4b334f34ee3a490393494afe3ea7b3c24b4c10f92"} Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.387662 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"4acd6dc5-d9e3-4a05-aed4-ecc80733f365","Type":"ContainerStarted","Data":"53a0fa9814fb6280eee9d32ef9feafa86a79d83048e27a7bb11c5cb7bbe0e602"} Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.392191 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.434265 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.337155694 podStartE2EDuration="51.434142934s" podCreationTimestamp="2026-02-16 11:23:48 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.373230518 +0000 UTC m=+1035.093415498" lastFinishedPulling="2026-02-16 11:24:34.470217758 +0000 UTC m=+1069.190402738" observedRunningTime="2026-02-16 11:24:39.427707309 +0000 UTC m=+1074.147892309" watchObservedRunningTime="2026-02-16 11:24:39.434142934 +0000 UTC m=+1074.154327924" Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.536105 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.536177 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 16 11:24:39 crc kubenswrapper[4797]: I0216 11:24:39.652744 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 16 11:24:40 crc kubenswrapper[4797]: I0216 11:24:40.398664 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"08b607dd-023c-4050-87d5-58f8f7f1714a","Type":"ContainerStarted","Data":"4c9972a3be42dcdffc340a9ac551831b6538b590674ee184350d1c3b1524502f"} Feb 16 11:24:40 crc kubenswrapper[4797]: I0216 11:24:40.400604 4797 generic.go:334] "Generic (PLEG): container finished" podID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerID="10c3892c9f010c9fb931f8a6dd0937caf2bc16fbe0e8a41de98eccbd07627fe5" exitCode=0 Feb 16 11:24:40 crc kubenswrapper[4797]: I0216 11:24:40.400687 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerDied","Data":"10c3892c9f010c9fb931f8a6dd0937caf2bc16fbe0e8a41de98eccbd07627fe5"} Feb 16 11:24:40 crc kubenswrapper[4797]: I0216 11:24:40.430808 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371982.423988 podStartE2EDuration="54.430787885s" podCreationTimestamp="2026-02-16 11:23:46 +0000 UTC" firstStartedPulling="2026-02-16 11:23:59.901521273 +0000 UTC m=+1034.621706253" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:40.424823062 +0000 UTC m=+1075.145008042" watchObservedRunningTime="2026-02-16 11:24:40.430787885 +0000 UTC m=+1075.150972865" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.411199 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ad8679cc-1167-4feb-a53a-49bded099628","Type":"ContainerStarted","Data":"8c4ec92b0bc470b9e04a92d32a0541bfb73154c6d28fe9873fecf46515fb9a5e"} Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.695837 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.977050 4797 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-698758b865-9kfc4"] Feb 16 11:24:41 crc kubenswrapper[4797]: E0216 11:24:41.977947 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24de11f-dc03-4dd2-9167-65577983742f" containerName="dnsmasq-dns" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.977970 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24de11f-dc03-4dd2-9167-65577983742f" containerName="dnsmasq-dns" Feb 16 11:24:41 crc kubenswrapper[4797]: E0216 11:24:41.977995 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerName="init" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.978004 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerName="init" Feb 16 11:24:41 crc kubenswrapper[4797]: E0216 11:24:41.978021 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerName="dnsmasq-dns" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.978029 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerName="dnsmasq-dns" Feb 16 11:24:41 crc kubenswrapper[4797]: E0216 11:24:41.978047 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a24de11f-dc03-4dd2-9167-65577983742f" containerName="init" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.978055 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a24de11f-dc03-4dd2-9167-65577983742f" containerName="init" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.978271 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a24de11f-dc03-4dd2-9167-65577983742f" containerName="dnsmasq-dns" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.978295 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="06616d46-a0f2-4bd4-ae40-00c67b9bfb0e" containerName="dnsmasq-dns" Feb 16 11:24:41 crc kubenswrapper[4797]: I0216 11:24:41.979635 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.008405 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9kfc4"] Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.023126 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.060936 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.061029 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx2sw\" (UniqueName: \"kubernetes.io/projected/5984b22e-1ba0-4050-a595-28423d93bc33-kube-api-access-qx2sw\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.061081 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.061152 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-dns-svc\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.061614 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-config\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.163154 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-config\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.163213 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.163268 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx2sw\" (UniqueName: \"kubernetes.io/projected/5984b22e-1ba0-4050-a595-28423d93bc33-kube-api-access-qx2sw\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " 
pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.163300 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.163350 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-dns-svc\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.164339 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-dns-svc\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.164934 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-config\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.165276 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.165996 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.209426 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx2sw\" (UniqueName: \"kubernetes.io/projected/5984b22e-1ba0-4050-a595-28423d93bc33-kube-api-access-qx2sw\") pod \"dnsmasq-dns-698758b865-9kfc4\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.322194 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.396655 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.470336 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.548089 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.794727 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9kfc4"] Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.853272 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-vzsll" Feb 16 11:24:42 crc kubenswrapper[4797]: I0216 11:24:42.904048 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.174506 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.180387 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.185175 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.185219 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-79gbp" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.185522 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.201371 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.214867 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.308755 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btlg7\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-kube-api-access-btlg7\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.308872 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f443541-845c-4fdd-b6d1-08aba5c39667-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.308970 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.308994 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1129d489-a0c4-4746-af52-106ec173d316\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1129d489-a0c4-4746-af52-106ec173d316\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.309020 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9f443541-845c-4fdd-b6d1-08aba5c39667-lock\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.309054 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9f443541-845c-4fdd-b6d1-08aba5c39667-cache\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.411081 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f443541-845c-4fdd-b6d1-08aba5c39667-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.411398 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.411450 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1129d489-a0c4-4746-af52-106ec173d316\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1129d489-a0c4-4746-af52-106ec173d316\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.411469 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9f443541-845c-4fdd-b6d1-08aba5c39667-lock\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.411501 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9f443541-845c-4fdd-b6d1-08aba5c39667-cache\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.411546 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btlg7\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-kube-api-access-btlg7\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: E0216 11:24:43.412104 4797 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 11:24:43 crc kubenswrapper[4797]: E0216 11:24:43.412146 4797 projected.go:194] Error preparing data for projected volume 
etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 11:24:43 crc kubenswrapper[4797]: E0216 11:24:43.412219 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift podName:9f443541-845c-4fdd-b6d1-08aba5c39667 nodeName:}" failed. No retries permitted until 2026-02-16 11:24:43.912191514 +0000 UTC m=+1078.632376494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift") pod "swift-storage-0" (UID: "9f443541-845c-4fdd-b6d1-08aba5c39667") : configmap "swift-ring-files" not found Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.412250 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9f443541-845c-4fdd-b6d1-08aba5c39667-cache\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.412362 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9f443541-845c-4fdd-b6d1-08aba5c39667-lock\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.418338 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.418395 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1129d489-a0c4-4746-af52-106ec173d316\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1129d489-a0c4-4746-af52-106ec173d316\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2126a31be7b6d9b30a6891e41013595fada93c1877069d96d0ff99c54a0eb57f/globalmount\"" pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.438670 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9kfc4" event={"ID":"5984b22e-1ba0-4050-a595-28423d93bc33","Type":"ContainerStarted","Data":"fb4df4c3d33bc02ae4f586ca6aa3eb588bc18120d5857c804bad793274cf30a4"} Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.486135 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1129d489-a0c4-4746-af52-106ec173d316\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1129d489-a0c4-4746-af52-106ec173d316\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.580879 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btlg7\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-kube-api-access-btlg7\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.584558 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f443541-845c-4fdd-b6d1-08aba5c39667-combined-ca-bundle\") pod \"swift-storage-0\" (UID: 
\"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.665678 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="4ab6c5d9-8717-4b1b-8d13-6eb03e52a080" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.666051 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.736065 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jj9d5"] Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.738448 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.748843 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jj9d5"] Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.749632 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.749762 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.749859 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.761544 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.819504 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-swiftconf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.820225 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-etc-swift\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.820296 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-combined-ca-bundle\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.820546 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-dispersionconf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.820891 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-scripts\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.821135 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvpf\" (UniqueName: \"kubernetes.io/projected/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-kube-api-access-bnvpf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.821306 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-ring-data-devices\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923101 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-swiftconf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923170 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-etc-swift\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923201 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-combined-ca-bundle\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923227 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-dispersionconf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923269 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-scripts\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923307 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnvpf\" (UniqueName: \"kubernetes.io/projected/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-kube-api-access-bnvpf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923343 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.923371 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-ring-data-devices\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.924049 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-ring-data-devices\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: E0216 11:24:43.924147 4797 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 11:24:43 crc kubenswrapper[4797]: E0216 11:24:43.924166 4797 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 11:24:43 crc kubenswrapper[4797]: E0216 11:24:43.924203 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift podName:9f443541-845c-4fdd-b6d1-08aba5c39667 nodeName:}" failed. No retries permitted until 2026-02-16 11:24:44.924191297 +0000 UTC m=+1079.644376277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift") pod "swift-storage-0" (UID: "9f443541-845c-4fdd-b6d1-08aba5c39667") : configmap "swift-ring-files" not found Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.924299 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-etc-swift\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.925157 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-scripts\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.932633 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-dispersionconf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.933668 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-swiftconf\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.957570 4797 operation_generator.go:637] "MountVolume.SetUp 
Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.957983 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-combined-ca-bundle\") pod \"swift-ring-rebalance-jj9d5\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " pod="openstack/swift-ring-rebalance-jj9d5"
Feb 16 11:24:43 crc kubenswrapper[4797]: I0216 11:24:43.973168 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.013360 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.064097 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jj9d5"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.447801 4797 generic.go:334] "Generic (PLEG): container finished" podID="5984b22e-1ba0-4050-a595-28423d93bc33" containerID="be6eaf0900da0384397bec37db1b3e17b142d9e55adfd705cbe70c5ee793ffde" exitCode=0
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.448188 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9kfc4" event={"ID":"5984b22e-1ba0-4050-a595-28423d93bc33","Type":"ContainerDied","Data":"be6eaf0900da0384397bec37db1b3e17b142d9e55adfd705cbe70c5ee793ffde"}
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.458888 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ad8679cc-1167-4feb-a53a-49bded099628","Type":"ContainerStarted","Data":"4fb093f8a990519478d389ba229c22445998f11a1ed5d6f0742f773583cd4710"}
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.459138 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.461399 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.495415 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=12.311708787 podStartE2EDuration="52.495397047s" podCreationTimestamp="2026-02-16 11:23:52 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.335995371 +0000 UTC m=+1035.056180351" lastFinishedPulling="2026-02-16 11:24:40.519683631 +0000 UTC m=+1075.239868611" observedRunningTime="2026-02-16 11:24:44.488356554 +0000 UTC m=+1079.208541544" watchObservedRunningTime="2026-02-16 11:24:44.495397047 +0000 UTC m=+1079.215582027"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.584726 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jj9d5"]
Feb 16 11:24:44 crc kubenswrapper[4797]: W0216 11:24:44.597017 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b48cc2d_f411_40a8_81a8_e7fc66b9a30a.slice/crio-00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b WatchSource:0}: Error finding container 00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b: Status 404 returned error can't find the container with id 00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b
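
The pod_startup_latency_tracker record above packs several timestamps into one line. Reading it as the field names suggest: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A quick check of that interpretation against the logged numbers, in Go (an illustration of the arithmetic, not an excerpt from the kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the alertmanager-metric-storage-0 record above.
	created := parse("2026-02-16 11:23:52 +0000 UTC")
	firstPull := parse("2026-02-16 11:24:00.335995371 +0000 UTC")
	lastPull := parse("2026-02-16 11:24:40.519683631 +0000 UTC")
	observed := parse("2026-02-16 11:24:44.495397047 +0000 UTC")

	e2e := observed.Sub(created)       // 52.495397047s = podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 40.18368826s spent pulling images
	slo := e2e - pulling               // 12.311708787s = podStartSLOduration
	fmt.Println(e2e, pulling, slo)
}
```

52.495397047s end-to-end minus 40.18368826s of image pulls is 12.311708787s, matching podStartSLOduration exactly. Compare the dnsmasq-dns-698758b865-9kfc4 record further down, where no pull happened (zero-value pull timestamps) and the two durations are equal.
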
WatchSource:0}: Error finding container 00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b: Status 404 returned error can't find the container with id 00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.706978 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.941827 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.943723 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.947165 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-dn559"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.947384 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.947492 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.948728 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.948880 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 11:24:44 crc kubenswrapper[4797]: I0216 11:24:44.958732 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0"
Feb 16 11:24:44 crc kubenswrapper[4797]: E0216 11:24:44.958886 4797 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 11:24:44 crc kubenswrapper[4797]: E0216 11:24:44.958898 4797 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 11:24:44 crc kubenswrapper[4797]: E0216 11:24:44.958938 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift podName:9f443541-845c-4fdd-b6d1-08aba5c39667 nodeName:}" failed. No retries permitted until 2026-02-16 11:24:46.958922197 +0000 UTC m=+1081.679107177 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift") pod "swift-storage-0" (UID: "9f443541-845c-4fdd-b6d1-08aba5c39667") : configmap "swift-ring-files" not found Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.060408 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.060471 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-config\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.060503 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667tl\" (UniqueName: \"kubernetes.io/projected/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-kube-api-access-667tl\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.060522 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.060542 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-scripts\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.060592 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.060761 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162334 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162399 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162426 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-config\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162457 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-667tl\" (UniqueName: \"kubernetes.io/projected/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-kube-api-access-667tl\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162479 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162494 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-scripts\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162524 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.162978 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.166139 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-scripts\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.166160 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-config\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.168108 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.170991 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " 
pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.174595 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.179689 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-667tl\" (UniqueName: \"kubernetes.io/projected/595e46ad-0edd-4cc1-b56d-e4aa4a1f1772-kube-api-access-667tl\") pod \"ovn-northd-0\" (UID: \"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772\") " pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.266762 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.477318 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jj9d5" event={"ID":"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a","Type":"ContainerStarted","Data":"00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b"} Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.482056 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9kfc4" event={"ID":"5984b22e-1ba0-4050-a595-28423d93bc33","Type":"ContainerStarted","Data":"e93f7d344bac97a6c4a7957c3a3fc983901c80194483b2f7a840f663f2d50ccf"} Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.482174 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.510394 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-9kfc4" podStartSLOduration=4.510375087 podStartE2EDuration="4.510375087s" podCreationTimestamp="2026-02-16 11:24:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:45.503652204 +0000 UTC m=+1080.223837204" watchObservedRunningTime="2026-02-16 11:24:45.510375087 +0000 UTC m=+1080.230560067" Feb 16 11:24:45 crc kubenswrapper[4797]: I0216 11:24:45.811477 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 16 11:24:46 crc kubenswrapper[4797]: I0216 11:24:46.491134 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772","Type":"ContainerStarted","Data":"20a766c4564253ea33d7542f0f690492ef2990e8f74af3d2781d3624230ae6ed"} Feb 16 11:24:47 crc kubenswrapper[4797]: I0216 11:24:47.017175 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:24:47 crc kubenswrapper[4797]: E0216 11:24:47.018025 4797 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 11:24:47 crc kubenswrapper[4797]: E0216 11:24:47.018058 4797 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 11:24:47 crc kubenswrapper[4797]: E0216 11:24:47.018113 4797 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift podName:9f443541-845c-4fdd-b6d1-08aba5c39667 nodeName:}" failed. No retries permitted until 2026-02-16 11:24:51.018092446 +0000 UTC m=+1085.738277486 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift") pod "swift-storage-0" (UID: "9f443541-845c-4fdd-b6d1-08aba5c39667") : configmap "swift-ring-files" not found
Feb 16 11:24:47 crc kubenswrapper[4797]: I0216 11:24:47.978280 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 16 11:24:47 crc kubenswrapper[4797]: I0216 11:24:47.978323 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.063402 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.214960 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-l8dct"]
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.218610 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.222165 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.240000 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l8dct"]
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.240056 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqmnh\" (UniqueName: \"kubernetes.io/projected/8e1b26f2-2414-4971-b1f5-c69190057184-kube-api-access-mqmnh\") pod \"root-account-create-update-l8dct\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") " pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.240141 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1b26f2-2414-4971-b1f5-c69190057184-operator-scripts\") pod \"root-account-create-update-l8dct\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") " pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.342476 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqmnh\" (UniqueName: \"kubernetes.io/projected/8e1b26f2-2414-4971-b1f5-c69190057184-kube-api-access-mqmnh\") pod \"root-account-create-update-l8dct\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") " pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.342571 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1b26f2-2414-4971-b1f5-c69190057184-operator-scripts\") pod \"root-account-create-update-l8dct\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") " pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.343394 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1b26f2-2414-4971-b1f5-c69190057184-operator-scripts\") pod \"root-account-create-update-l8dct\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") " pod="openstack/root-account-create-update-l8dct"
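
The openstack-galera-0 probe entries above trace the kubelet's probe state machine rather than an error: readiness is reported with an empty status (unknown) while the container is still booting, the startup probe reports "unhealthy" and then "started", and only after that handoff does readiness get evaluated, going "ready" a few entries below. Here are illustrative probe definitions that would produce such transitions; the commands and thresholds are assumptions for the sketch, not the mariadb operator's actual values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// While a startup probe is failing within its failureThreshold the
	// container is left running, and liveness/readiness checks are suspended
	// until it first succeeds; that is the unhealthy -> started -> ready
	// sequence visible in the log.
	startup := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "mysqladmin ping --silent"}}, // assumed command
		},
		PeriodSeconds:    10, // assumed
		FailureThreshold: 30, // assumed: allows ~5 min for a slow galera bootstrap
	}
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/bin/sh", "-c", "test -f /var/lib/mysql/ready"}}, // assumed command
		},
		PeriodSeconds: 10, // assumed
	}
	fmt.Printf("startup: %+v\nreadiness: %+v\n", startup, readiness)
}
```

The same mechanics explain the cloudkitty-lokistack-ingester-0 entries elsewhere in this capture: its readiness probe keeps returning HTTP 503, so the pod is simply withheld from endpoints until the ingester reports ready.
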
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.363270 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqmnh\" (UniqueName: \"kubernetes.io/projected/8e1b26f2-2414-4971-b1f5-c69190057184-kube-api-access-mqmnh\") pod \"root-account-create-update-l8dct\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") " pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.551992 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:48 crc kubenswrapper[4797]: I0216 11:24:48.618392 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.854286 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-spl6l"]
Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.856729 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-spl6l"
Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.868070 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-spl6l"]
Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.955765 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e09a-account-create-update-8scsc"]
Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.977207 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e09a-account-create-update-8scsc"]
Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.977261 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e09a-account-create-update-8scsc" Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.980524 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.997256 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6491fced-b625-48df-a033-29cd854a45da-operator-scripts\") pod \"keystone-db-create-spl6l\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") " pod="openstack/keystone-db-create-spl6l" Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.997341 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b75828bf-9dfb-4337-9ac4-710a7fbb62db-operator-scripts\") pod \"keystone-e09a-account-create-update-8scsc\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") " pod="openstack/keystone-e09a-account-create-update-8scsc" Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.997516 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtvl\" (UniqueName: \"kubernetes.io/projected/6491fced-b625-48df-a033-29cd854a45da-kube-api-access-thtvl\") pod \"keystone-db-create-spl6l\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") " pod="openstack/keystone-db-create-spl6l" Feb 16 11:24:50 crc kubenswrapper[4797]: I0216 11:24:50.997908 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzmv\" (UniqueName: \"kubernetes.io/projected/b75828bf-9dfb-4337-9ac4-710a7fbb62db-kube-api-access-qdzmv\") pod \"keystone-e09a-account-create-update-8scsc\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") " pod="openstack/keystone-e09a-account-create-update-8scsc" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.048993 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-xfrtt"] Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.050140 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-xfrtt"
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.068623 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xfrtt"]
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.098947 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-operator-scripts\") pod \"placement-db-create-xfrtt\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") " pod="openstack/placement-db-create-xfrtt"
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.099045 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdzmv\" (UniqueName: \"kubernetes.io/projected/b75828bf-9dfb-4337-9ac4-710a7fbb62db-kube-api-access-qdzmv\") pod \"keystone-e09a-account-create-update-8scsc\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") " pod="openstack/keystone-e09a-account-create-update-8scsc"
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.099109 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6491fced-b625-48df-a033-29cd854a45da-operator-scripts\") pod \"keystone-db-create-spl6l\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") " pod="openstack/keystone-db-create-spl6l"
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.099151 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b75828bf-9dfb-4337-9ac4-710a7fbb62db-operator-scripts\") pod \"keystone-e09a-account-create-update-8scsc\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") " pod="openstack/keystone-e09a-account-create-update-8scsc"
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.099209 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0"
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.099237 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkcz8\" (UniqueName: \"kubernetes.io/projected/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-kube-api-access-xkcz8\") pod \"placement-db-create-xfrtt\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") " pod="openstack/placement-db-create-xfrtt"
Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.099272 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thtvl\" (UniqueName: \"kubernetes.io/projected/6491fced-b625-48df-a033-29cd854a45da-kube-api-access-thtvl\") pod \"keystone-db-create-spl6l\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") " pod="openstack/keystone-db-create-spl6l"
Feb 16 11:24:51 crc kubenswrapper[4797]: E0216 11:24:51.099468 4797 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 11:24:51 crc kubenswrapper[4797]: E0216 11:24:51.099510 4797 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 11:24:51 crc kubenswrapper[4797]: E0216 11:24:51.099571 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift podName:9f443541-845c-4fdd-b6d1-08aba5c39667 nodeName:}" failed. No retries permitted until 2026-02-16 11:24:59.099551218 +0000 UTC m=+1093.819736198 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift") pod "swift-storage-0" (UID: "9f443541-845c-4fdd-b6d1-08aba5c39667") : configmap "swift-ring-files" not found
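
This is the fourth failed SetUp for the same volume, and the retry delay has doubled each time: durationBeforeRetry 1s, 2s, 4s, now 8s. That is the per-operation exponential backoff applied in nestedpendingoperations, which keeps doubling up to a cap (on the order of a couple of minutes); nothing will change until the rebalance job publishes swift-ring-files. A hedged sketch of how an external script could wait on that same condition rather than watching the kubelet retry, assuming client-go and a kubeconfig in the default location:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the ConfigMap the rebalance job is expected to publish exists,
	// mirroring (coarsely) the kubelet's own retry loop on this volume.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ConfigMaps("openstack").Get(ctx, "swift-ring-files", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not there yet: keep waiting, as the kubelet does
			}
			return err == nil, err
		})
	fmt.Println("swift-ring-files present:", err == nil)
}
```

Once the ConfigMap appears, the next kubelet retry should succeed and let swift-storage-0 progress past ContainerCreating.
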
"{volumeName:kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift podName:9f443541-845c-4fdd-b6d1-08aba5c39667 nodeName:}" failed. No retries permitted until 2026-02-16 11:24:59.099551218 +0000 UTC m=+1093.819736198 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift") pod "swift-storage-0" (UID: "9f443541-845c-4fdd-b6d1-08aba5c39667") : configmap "swift-ring-files" not found Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.099955 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b75828bf-9dfb-4337-9ac4-710a7fbb62db-operator-scripts\") pod \"keystone-e09a-account-create-update-8scsc\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") " pod="openstack/keystone-e09a-account-create-update-8scsc" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.100594 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6491fced-b625-48df-a033-29cd854a45da-operator-scripts\") pod \"keystone-db-create-spl6l\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") " pod="openstack/keystone-db-create-spl6l" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.118271 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdzmv\" (UniqueName: \"kubernetes.io/projected/b75828bf-9dfb-4337-9ac4-710a7fbb62db-kube-api-access-qdzmv\") pod \"keystone-e09a-account-create-update-8scsc\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") " pod="openstack/keystone-e09a-account-create-update-8scsc" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.121308 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtvl\" (UniqueName: \"kubernetes.io/projected/6491fced-b625-48df-a033-29cd854a45da-kube-api-access-thtvl\") pod \"keystone-db-create-spl6l\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") " pod="openstack/keystone-db-create-spl6l" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.138947 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ebfd-account-create-update-b8xtr"] Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.140089 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.141638 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.157733 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ebfd-account-create-update-b8xtr"] Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.183740 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-spl6l" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.200663 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkcz8\" (UniqueName: \"kubernetes.io/projected/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-kube-api-access-xkcz8\") pod \"placement-db-create-xfrtt\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") " pod="openstack/placement-db-create-xfrtt" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.200749 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-operator-scripts\") pod \"placement-db-create-xfrtt\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") " pod="openstack/placement-db-create-xfrtt" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.200792 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6blg\" (UniqueName: \"kubernetes.io/projected/979573d1-ce03-454a-9d96-94372635c0cd-kube-api-access-k6blg\") pod \"placement-ebfd-account-create-update-b8xtr\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") " pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.200871 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979573d1-ce03-454a-9d96-94372635c0cd-operator-scripts\") pod \"placement-ebfd-account-create-update-b8xtr\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") " pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.201486 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-operator-scripts\") pod \"placement-db-create-xfrtt\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") " pod="openstack/placement-db-create-xfrtt" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.216543 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkcz8\" (UniqueName: \"kubernetes.io/projected/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-kube-api-access-xkcz8\") pod \"placement-db-create-xfrtt\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") " pod="openstack/placement-db-create-xfrtt" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.302252 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6blg\" (UniqueName: \"kubernetes.io/projected/979573d1-ce03-454a-9d96-94372635c0cd-kube-api-access-k6blg\") pod \"placement-ebfd-account-create-update-b8xtr\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") " pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.302366 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979573d1-ce03-454a-9d96-94372635c0cd-operator-scripts\") pod \"placement-ebfd-account-create-update-b8xtr\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") " pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.303134 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/979573d1-ce03-454a-9d96-94372635c0cd-operator-scripts\") pod \"placement-ebfd-account-create-update-b8xtr\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") " pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.304788 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e09a-account-create-update-8scsc" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.319168 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6blg\" (UniqueName: \"kubernetes.io/projected/979573d1-ce03-454a-9d96-94372635c0cd-kube-api-access-k6blg\") pod \"placement-ebfd-account-create-update-b8xtr\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") " pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.365892 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xfrtt" Feb 16 11:24:51 crc kubenswrapper[4797]: I0216 11:24:51.500061 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ebfd-account-create-update-b8xtr" Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.323737 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.384179 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fwg2v"] Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.384776 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerName="dnsmasq-dns" containerID="cri-o://7676882ea0fb00cc85f3b146e41b5b14964e2991f4a77e04c9e02f2824451d87" gracePeriod=10 Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.512022 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-cnfpr" Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.516769 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-spl6l"] Feb 16 11:24:52 crc kubenswrapper[4797]: W0216 11:24:52.567450 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6491fced_b625_48df_a033_29cd854a45da.slice/crio-ad005628bf5e236ce67e556d9b7db3d7c2624f4f6c5ab1258bee98c02265d1de WatchSource:0}: Error finding container ad005628bf5e236ce67e556d9b7db3d7c2624f4f6c5ab1258bee98c02265d1de: Status 404 returned error can't find the container with id ad005628bf5e236ce67e556d9b7db3d7c2624f4f6c5ab1258bee98c02265d1de Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.575865 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerStarted","Data":"16415f3ace1f92241ac1bc115d0fd48d6634facace715c2436c85e569e2a7a89"} Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.602717 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772","Type":"ContainerStarted","Data":"fdaacee27ff51fd62beb70f2681fd3ba0d5591da8993318797d1ad731aa8dac7"} Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.628387 4797 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jj9d5" event={"ID":"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a","Type":"ContainerStarted","Data":"403f1f33da741a9ee0525981a22398321b280ff65e90789546ea9ebefe93541f"} Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.655000 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jj9d5" podStartSLOduration=2.2195281590000002 podStartE2EDuration="9.654984219s" podCreationTimestamp="2026-02-16 11:24:43 +0000 UTC" firstStartedPulling="2026-02-16 11:24:44.601809821 +0000 UTC m=+1079.321994791" lastFinishedPulling="2026-02-16 11:24:52.037265871 +0000 UTC m=+1086.757450851" observedRunningTime="2026-02-16 11:24:52.654834625 +0000 UTC m=+1087.375019615" watchObservedRunningTime="2026-02-16 11:24:52.654984219 +0000 UTC m=+1087.375169199" Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.676787 4797 generic.go:334] "Generic (PLEG): container finished" podID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerID="7676882ea0fb00cc85f3b146e41b5b14964e2991f4a77e04c9e02f2824451d87" exitCode=0 Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.676863 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" event={"ID":"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58","Type":"ContainerDied","Data":"7676882ea0fb00cc85f3b146e41b5b14964e2991f4a77e04c9e02f2824451d87"} Feb 16 11:24:52 crc kubenswrapper[4797]: W0216 11:24:52.871320 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e1b26f2_2414_4971_b1f5_c69190057184.slice/crio-14618e55952735d1bab355353a54334efe81b483d9bf92e120c8f7caa8e89f79 WatchSource:0}: Error finding container 14618e55952735d1bab355353a54334efe81b483d9bf92e120c8f7caa8e89f79: Status 404 returned error can't find the container with id 14618e55952735d1bab355353a54334efe81b483d9bf92e120c8f7caa8e89f79 Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.871395 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ebfd-account-create-update-b8xtr"] Feb 16 11:24:52 crc kubenswrapper[4797]: I0216 11:24:52.883036 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-l8dct"] Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.027408 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-xfrtt"] Feb 16 11:24:53 crc kubenswrapper[4797]: W0216 11:24:53.030972 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73dbbcd9_7ef8_4a40_ad11_d3f6de830711.slice/crio-68d0e63fc5de488a772fd0a45fa68e8770c6cbee2dbc4859907a29812e80ae9d WatchSource:0}: Error finding container 68d0e63fc5de488a772fd0a45fa68e8770c6cbee2dbc4859907a29812e80ae9d: Status 404 returned error can't find the container with id 68d0e63fc5de488a772fd0a45fa68e8770c6cbee2dbc4859907a29812e80ae9d Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.031001 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" Feb 16 11:24:53 crc kubenswrapper[4797]: W0216 11:24:53.050179 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb75828bf_9dfb_4337_9ac4_710a7fbb62db.slice/crio-6345c8ce71aae9677b8a5435f861cef482d1d4fb337468313392534c8b812272 WatchSource:0}: Error finding container 6345c8ce71aae9677b8a5435f861cef482d1d4fb337468313392534c8b812272: Status 404 returned error can't find the container with id 6345c8ce71aae9677b8a5435f861cef482d1d4fb337468313392534c8b812272 Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.054700 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e09a-account-create-update-8scsc"] Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.154673 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4nxc\" (UniqueName: \"kubernetes.io/projected/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-kube-api-access-h4nxc\") pod \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.154953 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-dns-svc\") pod \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.154982 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-config\") pod \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.155652 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-nb\") pod \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.155747 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-sb\") pod \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\" (UID: \"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58\") " Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.161928 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-kube-api-access-h4nxc" (OuterVolumeSpecName: "kube-api-access-h4nxc") pod "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" (UID: "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58"). InnerVolumeSpecName "kube-api-access-h4nxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.207078 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-config" (OuterVolumeSpecName: "config") pod "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" (UID: "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.209427 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" (UID: "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.214471 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" (UID: "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.236754 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" (UID: "9d02a6e0-fd01-49a4-80c1-3aa581fd0f58"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.260109 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.260152 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4nxc\" (UniqueName: \"kubernetes.io/projected/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-kube-api-access-h4nxc\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.260164 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.260177 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.260188 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.667704 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="4ab6c5d9-8717-4b1b-8d13-6eb03e52a080" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.688676 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" event={"ID":"9d02a6e0-fd01-49a4-80c1-3aa581fd0f58","Type":"ContainerDied","Data":"bd3f6f1d58fb6e5d09e414c6af58cf028ad79ae299bb754a4bd17e9a0f9d7a2c"} Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.688732 4797 scope.go:117] "RemoveContainer" containerID="7676882ea0fb00cc85f3b146e41b5b14964e2991f4a77e04c9e02f2824451d87" Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.688884 
4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v"
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.691310 4797 generic.go:334] "Generic (PLEG): container finished" podID="8e1b26f2-2414-4971-b1f5-c69190057184" containerID="aa8addfefb65f4141009154f91dea0df9fec4e6c78cb2bb340e0de3bf13759c3" exitCode=0
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.691383 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l8dct" event={"ID":"8e1b26f2-2414-4971-b1f5-c69190057184","Type":"ContainerDied","Data":"aa8addfefb65f4141009154f91dea0df9fec4e6c78cb2bb340e0de3bf13759c3"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.691414 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l8dct" event={"ID":"8e1b26f2-2414-4971-b1f5-c69190057184","Type":"ContainerStarted","Data":"14618e55952735d1bab355353a54334efe81b483d9bf92e120c8f7caa8e89f79"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.697739 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e09a-account-create-update-8scsc" event={"ID":"b75828bf-9dfb-4337-9ac4-710a7fbb62db","Type":"ContainerStarted","Data":"e669249122d87a91f8b4e07b07fecea9be861741461be519295aaf4e0b45ab21"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.697778 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e09a-account-create-update-8scsc" event={"ID":"b75828bf-9dfb-4337-9ac4-710a7fbb62db","Type":"ContainerStarted","Data":"6345c8ce71aae9677b8a5435f861cef482d1d4fb337468313392534c8b812272"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.700822 4797 generic.go:334] "Generic (PLEG): container finished" podID="979573d1-ce03-454a-9d96-94372635c0cd" containerID="a9e03302729a7df71a82c9c25e74f00a943ebdf097fd45694a73f40f411024eb" exitCode=0
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.700974 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ebfd-account-create-update-b8xtr" event={"ID":"979573d1-ce03-454a-9d96-94372635c0cd","Type":"ContainerDied","Data":"a9e03302729a7df71a82c9c25e74f00a943ebdf097fd45694a73f40f411024eb"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.701014 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ebfd-account-create-update-b8xtr" event={"ID":"979573d1-ce03-454a-9d96-94372635c0cd","Type":"ContainerStarted","Data":"76394f70963aabd83681a177f46ab0c6ea08040552ace19acaffa13432f769f8"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.706652 4797 scope.go:117] "RemoveContainer" containerID="963826670d47dcea583e19b2e8854f4d5613bf7b3abc1d12fb522cd7d76df6d7"
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.706697 4797 generic.go:334] "Generic (PLEG): container finished" podID="6491fced-b625-48df-a033-29cd854a45da" containerID="f59edb93b2aa7f23639dfca5bcf7c71513ed5f21c66d1b63e52079dfd59f9399" exitCode=0
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.706774 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-spl6l" event={"ID":"6491fced-b625-48df-a033-29cd854a45da","Type":"ContainerDied","Data":"f59edb93b2aa7f23639dfca5bcf7c71513ed5f21c66d1b63e52079dfd59f9399"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.706805 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-spl6l" event={"ID":"6491fced-b625-48df-a033-29cd854a45da","Type":"ContainerStarted","Data":"ad005628bf5e236ce67e556d9b7db3d7c2624f4f6c5ab1258bee98c02265d1de"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.709663 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xfrtt" event={"ID":"73dbbcd9-7ef8-4a40-ad11-d3f6de830711","Type":"ContainerStarted","Data":"d30e1b2abb360f4a19e1cab67d7af73ee4d0bab6554dc10f666948c08e7a7978"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.709701 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xfrtt" event={"ID":"73dbbcd9-7ef8-4a40-ad11-d3f6de830711","Type":"ContainerStarted","Data":"68d0e63fc5de488a772fd0a45fa68e8770c6cbee2dbc4859907a29812e80ae9d"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.722467 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"595e46ad-0edd-4cc1-b56d-e4aa4a1f1772","Type":"ContainerStarted","Data":"12442d8abb6bd8a6f2b964edfa4b62f6a2fd356d1e5b234fdb8b11dc43d77358"}
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.722534 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.748744 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-e09a-account-create-update-8scsc" podStartSLOduration=3.74872526 podStartE2EDuration="3.74872526s" podCreationTimestamp="2026-02-16 11:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:53.72561586 +0000 UTC m=+1088.445800850" watchObservedRunningTime="2026-02-16 11:24:53.74872526 +0000 UTC m=+1088.468910240"
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.771364 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fwg2v"]
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.795622 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-fwg2v"]
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.825813 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-xfrtt" podStartSLOduration=2.825799493 podStartE2EDuration="2.825799493s" podCreationTimestamp="2026-02-16 11:24:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:53.78717024 +0000 UTC m=+1088.507355220" watchObservedRunningTime="2026-02-16 11:24:53.825799493 +0000 UTC m=+1088.545984473"
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.836166 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.5505305480000002 podStartE2EDuration="9.836148486s" podCreationTimestamp="2026-02-16 11:24:44 +0000 UTC" firstStartedPulling="2026-02-16 11:24:45.816604265 +0000 UTC m=+1080.536789245" lastFinishedPulling="2026-02-16 11:24:52.102222203 +0000 UTC m=+1086.822407183" observedRunningTime="2026-02-16 11:24:53.811508113 +0000 UTC m=+1088.531693093" watchObservedRunningTime="2026-02-16 11:24:53.836148486 +0000 UTC m=+1088.556333466"
Feb 16 11:24:53 crc kubenswrapper[4797]: I0216 11:24:53.992749 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" path="/var/lib/kubelet/pods/9d02a6e0-fd01-49a4-80c1-3aa581fd0f58/volumes"
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.733161 4797 generic.go:334] "Generic (PLEG): container finished" podID="73dbbcd9-7ef8-4a40-ad11-d3f6de830711" containerID="d30e1b2abb360f4a19e1cab67d7af73ee4d0bab6554dc10f666948c08e7a7978" exitCode=0
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.733330 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xfrtt" event={"ID":"73dbbcd9-7ef8-4a40-ad11-d3f6de830711","Type":"ContainerDied","Data":"d30e1b2abb360f4a19e1cab67d7af73ee4d0bab6554dc10f666948c08e7a7978"}
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.736648 4797 generic.go:334] "Generic (PLEG): container finished" podID="1aa87d44-dc52-4398-a8f5-0adf7d33966e" containerID="f9f043d6115c5196475dc2af25329d9344140e068d8d356839f140959f075ccc" exitCode=0
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.736736 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aa87d44-dc52-4398-a8f5-0adf7d33966e","Type":"ContainerDied","Data":"f9f043d6115c5196475dc2af25329d9344140e068d8d356839f140959f075ccc"}
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.743966 4797 generic.go:334] "Generic (PLEG): container finished" podID="b75828bf-9dfb-4337-9ac4-710a7fbb62db" containerID="e669249122d87a91f8b4e07b07fecea9be861741461be519295aaf4e0b45ab21" exitCode=0
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.744153 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e09a-account-create-update-8scsc" event={"ID":"b75828bf-9dfb-4337-9ac4-710a7fbb62db","Type":"ContainerDied","Data":"e669249122d87a91f8b4e07b07fecea9be861741461be519295aaf4e0b45ab21"}
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.747277 4797 generic.go:334] "Generic (PLEG): container finished" podID="40b82cbf-8ce3-45e9-a87e-a96cbe83488c" containerID="b30f5794a1569a833b29a4e2003c86c7568e0f5b3acb482d44119bb5e723c6ae" exitCode=0
Feb 16 11:24:54 crc kubenswrapper[4797]: I0216 11:24:54.747615 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"40b82cbf-8ce3-45e9-a87e-a96cbe83488c","Type":"ContainerDied","Data":"b30f5794a1569a833b29a4e2003c86c7568e0f5b3acb482d44119bb5e723c6ae"}
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.041667 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-j46lb"]
Feb 16 11:24:55 crc kubenswrapper[4797]: E0216 11:24:55.042443 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerName="init"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.042458 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerName="init"
Feb 16 11:24:55 crc kubenswrapper[4797]: E0216 11:24:55.042474 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerName="dnsmasq-dns"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.042485 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerName="dnsmasq-dns"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.042717 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerName="dnsmasq-dns"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.043628 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.058733 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j46lb"]
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.111625 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncmkx\" (UniqueName: \"kubernetes.io/projected/2435c436-da01-4acc-a193-7f1337ece1ef-kube-api-access-ncmkx\") pod \"glance-db-create-j46lb\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") " pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.111800 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2435c436-da01-4acc-a193-7f1337ece1ef-operator-scripts\") pod \"glance-db-create-j46lb\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") " pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.168658 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a443-account-create-update-dqvgt"]
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.170170 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.174124 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.185227 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a443-account-create-update-dqvgt"]
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.214093 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncmkx\" (UniqueName: \"kubernetes.io/projected/2435c436-da01-4acc-a193-7f1337ece1ef-kube-api-access-ncmkx\") pod \"glance-db-create-j46lb\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") " pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.214327 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2435c436-da01-4acc-a193-7f1337ece1ef-operator-scripts\") pod \"glance-db-create-j46lb\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") " pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.215378 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2435c436-da01-4acc-a193-7f1337ece1ef-operator-scripts\") pod \"glance-db-create-j46lb\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") " pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.241185 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncmkx\" (UniqueName: \"kubernetes.io/projected/2435c436-da01-4acc-a193-7f1337ece1ef-kube-api-access-ncmkx\") pod \"glance-db-create-j46lb\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") " pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.315995 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5n78\" (UniqueName: \"kubernetes.io/projected/67122d2a-58c7-48a5-893b-1ad4382838eb-kube-api-access-c5n78\") pod \"glance-a443-account-create-update-dqvgt\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") " pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.316044 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67122d2a-58c7-48a5-893b-1ad4382838eb-operator-scripts\") pod \"glance-a443-account-create-update-dqvgt\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") " pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.417607 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5n78\" (UniqueName: \"kubernetes.io/projected/67122d2a-58c7-48a5-893b-1ad4382838eb-kube-api-access-c5n78\") pod \"glance-a443-account-create-update-dqvgt\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") " pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.417704 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67122d2a-58c7-48a5-893b-1ad4382838eb-operator-scripts\") pod \"glance-a443-account-create-update-dqvgt\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") " pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.419011 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67122d2a-58c7-48a5-893b-1ad4382838eb-operator-scripts\") pod \"glance-a443-account-create-update-dqvgt\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") " pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.450370 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5n78\" (UniqueName: \"kubernetes.io/projected/67122d2a-58c7-48a5-893b-1ad4382838eb-kube-api-access-c5n78\") pod \"glance-a443-account-create-update-dqvgt\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") " pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.566039 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.598897 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.605506 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-spl6l"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.615806 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.623559 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ebfd-account-create-update-b8xtr"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.722336 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979573d1-ce03-454a-9d96-94372635c0cd-operator-scripts\") pod \"979573d1-ce03-454a-9d96-94372635c0cd\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") "
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.722485 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqmnh\" (UniqueName: \"kubernetes.io/projected/8e1b26f2-2414-4971-b1f5-c69190057184-kube-api-access-mqmnh\") pod \"8e1b26f2-2414-4971-b1f5-c69190057184\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") "
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.722621 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1b26f2-2414-4971-b1f5-c69190057184-operator-scripts\") pod \"8e1b26f2-2414-4971-b1f5-c69190057184\" (UID: \"8e1b26f2-2414-4971-b1f5-c69190057184\") "
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.722665 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6blg\" (UniqueName: \"kubernetes.io/projected/979573d1-ce03-454a-9d96-94372635c0cd-kube-api-access-k6blg\") pod \"979573d1-ce03-454a-9d96-94372635c0cd\" (UID: \"979573d1-ce03-454a-9d96-94372635c0cd\") "
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.722713 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thtvl\" (UniqueName: \"kubernetes.io/projected/6491fced-b625-48df-a033-29cd854a45da-kube-api-access-thtvl\") pod \"6491fced-b625-48df-a033-29cd854a45da\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") "
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.722797 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6491fced-b625-48df-a033-29cd854a45da-operator-scripts\") pod \"6491fced-b625-48df-a033-29cd854a45da\" (UID: \"6491fced-b625-48df-a033-29cd854a45da\") "
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.723158 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/979573d1-ce03-454a-9d96-94372635c0cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "979573d1-ce03-454a-9d96-94372635c0cd" (UID: "979573d1-ce03-454a-9d96-94372635c0cd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.723611 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6491fced-b625-48df-a033-29cd854a45da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6491fced-b625-48df-a033-29cd854a45da" (UID: "6491fced-b625-48df-a033-29cd854a45da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.723971 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6491fced-b625-48df-a033-29cd854a45da-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.723995 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979573d1-ce03-454a-9d96-94372635c0cd-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.723617 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e1b26f2-2414-4971-b1f5-c69190057184-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e1b26f2-2414-4971-b1f5-c69190057184" (UID: "8e1b26f2-2414-4971-b1f5-c69190057184"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.728077 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979573d1-ce03-454a-9d96-94372635c0cd-kube-api-access-k6blg" (OuterVolumeSpecName: "kube-api-access-k6blg") pod "979573d1-ce03-454a-9d96-94372635c0cd" (UID: "979573d1-ce03-454a-9d96-94372635c0cd"). InnerVolumeSpecName "kube-api-access-k6blg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.731870 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6491fced-b625-48df-a033-29cd854a45da-kube-api-access-thtvl" (OuterVolumeSpecName: "kube-api-access-thtvl") pod "6491fced-b625-48df-a033-29cd854a45da" (UID: "6491fced-b625-48df-a033-29cd854a45da"). InnerVolumeSpecName "kube-api-access-thtvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.736441 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e1b26f2-2414-4971-b1f5-c69190057184-kube-api-access-mqmnh" (OuterVolumeSpecName: "kube-api-access-mqmnh") pod "8e1b26f2-2414-4971-b1f5-c69190057184" (UID: "8e1b26f2-2414-4971-b1f5-c69190057184"). InnerVolumeSpecName "kube-api-access-mqmnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.810208 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-l8dct" event={"ID":"8e1b26f2-2414-4971-b1f5-c69190057184","Type":"ContainerDied","Data":"14618e55952735d1bab355353a54334efe81b483d9bf92e120c8f7caa8e89f79"}
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.810479 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14618e55952735d1bab355353a54334efe81b483d9bf92e120c8f7caa8e89f79"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.810534 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-l8dct"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.817609 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ebfd-account-create-update-b8xtr" event={"ID":"979573d1-ce03-454a-9d96-94372635c0cd","Type":"ContainerDied","Data":"76394f70963aabd83681a177f46ab0c6ea08040552ace19acaffa13432f769f8"}
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.817641 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76394f70963aabd83681a177f46ab0c6ea08040552ace19acaffa13432f769f8"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.817721 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ebfd-account-create-update-b8xtr"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.824038 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-spl6l"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.824064 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-spl6l" event={"ID":"6491fced-b625-48df-a033-29cd854a45da","Type":"ContainerDied","Data":"ad005628bf5e236ce67e556d9b7db3d7c2624f4f6c5ab1258bee98c02265d1de"}
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.824131 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad005628bf5e236ce67e556d9b7db3d7c2624f4f6c5ab1258bee98c02265d1de"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.825772 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqmnh\" (UniqueName: \"kubernetes.io/projected/8e1b26f2-2414-4971-b1f5-c69190057184-kube-api-access-mqmnh\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.825788 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e1b26f2-2414-4971-b1f5-c69190057184-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.825798 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6blg\" (UniqueName: \"kubernetes.io/projected/979573d1-ce03-454a-9d96-94372635c0cd-kube-api-access-k6blg\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.825806 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thtvl\" (UniqueName: \"kubernetes.io/projected/6491fced-b625-48df-a033-29cd854a45da-kube-api-access-thtvl\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.830547 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zgw2f"
Feb 16 11:24:55 crc kubenswrapper[4797]: I0216 11:24:55.841722 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-zgw2f"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.070755 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-j46lb"]
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.089667 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-dht7z-config-5ck9h"]
Feb 16 11:24:56 crc kubenswrapper[4797]: E0216 11:24:56.090148 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979573d1-ce03-454a-9d96-94372635c0cd" containerName="mariadb-account-create-update"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.090172 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="979573d1-ce03-454a-9d96-94372635c0cd" containerName="mariadb-account-create-update"
Feb 16 11:24:56 crc kubenswrapper[4797]: E0216 11:24:56.090183 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6491fced-b625-48df-a033-29cd854a45da" containerName="mariadb-database-create"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.090191 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="6491fced-b625-48df-a033-29cd854a45da" containerName="mariadb-database-create"
Feb 16 11:24:56 crc kubenswrapper[4797]: E0216 11:24:56.090210 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e1b26f2-2414-4971-b1f5-c69190057184" containerName="mariadb-account-create-update"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.090218 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e1b26f2-2414-4971-b1f5-c69190057184" containerName="mariadb-account-create-update"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.090448 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e1b26f2-2414-4971-b1f5-c69190057184" containerName="mariadb-account-create-update"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.090466 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="979573d1-ce03-454a-9d96-94372635c0cd" containerName="mariadb-account-create-update"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.090480 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="6491fced-b625-48df-a033-29cd854a45da" containerName="mariadb-database-create"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.091283 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.096838 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.105129 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dht7z-config-5ck9h"]
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.236157 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run-ovn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.236524 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-additional-scripts\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.236620 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-log-ovn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.236660 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.236684 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-scripts\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.236801 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldtxn\" (UniqueName: \"kubernetes.io/projected/5d4394ae-cae1-4db5-89a9-85912b8d08e5-kube-api-access-ldtxn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.338968 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run-ovn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339022 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-additional-scripts\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339083 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-log-ovn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339111 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339134 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-scripts\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339236 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldtxn\" (UniqueName: \"kubernetes.io/projected/5d4394ae-cae1-4db5-89a9-85912b8d08e5-kube-api-access-ldtxn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339644 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-log-ovn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339700 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.339722 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run-ovn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.340482 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-additional-scripts\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.342718 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-scripts\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.369008 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldtxn\" (UniqueName: \"kubernetes.io/projected/5d4394ae-cae1-4db5-89a9-85912b8d08e5-kube-api-access-ldtxn\") pod \"ovn-controller-dht7z-config-5ck9h\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") " pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.404455 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a443-account-create-update-dqvgt"]
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.449536 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xfrtt"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.457279 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e09a-account-create-update-8scsc"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.544337 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdzmv\" (UniqueName: \"kubernetes.io/projected/b75828bf-9dfb-4337-9ac4-710a7fbb62db-kube-api-access-qdzmv\") pod \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") "
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.544460 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b75828bf-9dfb-4337-9ac4-710a7fbb62db-operator-scripts\") pod \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\" (UID: \"b75828bf-9dfb-4337-9ac4-710a7fbb62db\") "
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.544634 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-operator-scripts\") pod \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") "
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.544678 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkcz8\" (UniqueName: \"kubernetes.io/projected/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-kube-api-access-xkcz8\") pod \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\" (UID: \"73dbbcd9-7ef8-4a40-ad11-d3f6de830711\") "
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.549963 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-kube-api-access-xkcz8" (OuterVolumeSpecName: "kube-api-access-xkcz8") pod "73dbbcd9-7ef8-4a40-ad11-d3f6de830711" (UID: "73dbbcd9-7ef8-4a40-ad11-d3f6de830711"). InnerVolumeSpecName "kube-api-access-xkcz8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.552859 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b75828bf-9dfb-4337-9ac4-710a7fbb62db-kube-api-access-qdzmv" (OuterVolumeSpecName: "kube-api-access-qdzmv") pod "b75828bf-9dfb-4337-9ac4-710a7fbb62db" (UID: "b75828bf-9dfb-4337-9ac4-710a7fbb62db"). InnerVolumeSpecName "kube-api-access-qdzmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.553245 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73dbbcd9-7ef8-4a40-ad11-d3f6de830711" (UID: "73dbbcd9-7ef8-4a40-ad11-d3f6de830711"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.553279 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b75828bf-9dfb-4337-9ac4-710a7fbb62db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b75828bf-9dfb-4337-9ac4-710a7fbb62db" (UID: "b75828bf-9dfb-4337-9ac4-710a7fbb62db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.553668 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.646621 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.646906 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkcz8\" (UniqueName: \"kubernetes.io/projected/73dbbcd9-7ef8-4a40-ad11-d3f6de830711-kube-api-access-xkcz8\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.646920 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdzmv\" (UniqueName: \"kubernetes.io/projected/b75828bf-9dfb-4337-9ac4-710a7fbb62db-kube-api-access-qdzmv\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.646929 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b75828bf-9dfb-4337-9ac4-710a7fbb62db-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.833612 4797 generic.go:334] "Generic (PLEG): container finished" podID="2435c436-da01-4acc-a193-7f1337ece1ef" containerID="6821a9a9bde0fb4eb1048144df3a82a0ddfb570e9b0d71eb7dd970f5c9b7be8c" exitCode=0
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.833670 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j46lb" event={"ID":"2435c436-da01-4acc-a193-7f1337ece1ef","Type":"ContainerDied","Data":"6821a9a9bde0fb4eb1048144df3a82a0ddfb570e9b0d71eb7dd970f5c9b7be8c"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.833694 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j46lb" event={"ID":"2435c436-da01-4acc-a193-7f1337ece1ef","Type":"ContainerStarted","Data":"20af4750e9ca94d45f8696ffa83e7e59ef066580b7fbd65b1e1cf59f92fd914c"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.836745 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"40b82cbf-8ce3-45e9-a87e-a96cbe83488c","Type":"ContainerStarted","Data":"f98209213a7de4015a0645ef720f2c717b0d267364ea2d1ad97cead84b3f5174"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.837527 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.838889 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-xfrtt" event={"ID":"73dbbcd9-7ef8-4a40-ad11-d3f6de830711","Type":"ContainerDied","Data":"68d0e63fc5de488a772fd0a45fa68e8770c6cbee2dbc4859907a29812e80ae9d"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.838911 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-xfrtt"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.838916 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68d0e63fc5de488a772fd0a45fa68e8770c6cbee2dbc4859907a29812e80ae9d"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.846394 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"1aa87d44-dc52-4398-a8f5-0adf7d33966e","Type":"ContainerStarted","Data":"792fe2e94ca8fdfa263c7f67ac7566e6645de91d1db412889bb9ca69fdad6fda"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.847115 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.851505 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a443-account-create-update-dqvgt" event={"ID":"67122d2a-58c7-48a5-893b-1ad4382838eb","Type":"ContainerStarted","Data":"1e5ca7f38eaad9b767f59004d6270f9592320de7f66cc7456315450815de9b19"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.851539 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a443-account-create-update-dqvgt" event={"ID":"67122d2a-58c7-48a5-893b-1ad4382838eb","Type":"ContainerStarted","Data":"acd279dcf4d8a55083f936235ec51f0dbfc86f13397f355a5ef23914b2483813"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.861900 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e09a-account-create-update-8scsc"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.861892 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e09a-account-create-update-8scsc" event={"ID":"b75828bf-9dfb-4337-9ac4-710a7fbb62db","Type":"ContainerDied","Data":"6345c8ce71aae9677b8a5435f861cef482d1d4fb337468313392534c8b812272"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.862136 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6345c8ce71aae9677b8a5435f861cef482d1d4fb337468313392534c8b812272"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.864873 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerStarted","Data":"ba83bf3ef96f3074fa46a9c5ce77e7912ec1fa6b24485db7cb8cec10de2f8696"}
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.883762 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.330448118 podStartE2EDuration="1m11.883742872s" podCreationTimestamp="2026-02-16 11:23:45 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.053163562 +0000 UTC m=+1034.773348542" lastFinishedPulling="2026-02-16 11:24:20.606458306 +0000 UTC m=+1055.326643296" observedRunningTime="2026-02-16 11:24:56.874542401 +0000 UTC m=+1091.594727381" watchObservedRunningTime="2026-02-16 11:24:56.883742872 +0000 UTC m=+1091.603927852"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.907447 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.659049748 podStartE2EDuration="1m11.907433168s" podCreationTimestamp="2026-02-16 11:23:45 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.073155047 +0000 UTC m=+1034.793340017" lastFinishedPulling="2026-02-16 11:24:19.321538457 +0000 UTC m=+1054.041723437" observedRunningTime="2026-02-16 11:24:56.905086524 +0000 UTC m=+1091.625271504" watchObservedRunningTime="2026-02-16 11:24:56.907433168 +0000 UTC m=+1091.627618148"
Feb 16 11:24:56 crc kubenswrapper[4797]: I0216 11:24:56.927961 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-a443-account-create-update-dqvgt" podStartSLOduration=1.927943318 podStartE2EDuration="1.927943318s" podCreationTimestamp="2026-02-16 11:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:56.925804209 +0000 UTC m=+1091.645989189" watchObservedRunningTime="2026-02-16 11:24:56.927943318 +0000 UTC m=+1091.648128298"
Feb 16 11:24:57 crc kubenswrapper[4797]: I0216 11:24:57.067818 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-dht7z-config-5ck9h"]
Feb 16 11:24:57 crc kubenswrapper[4797]: I0216 11:24:57.821844 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-fwg2v" podUID="9d02a6e0-fd01-49a4-80c1-3aa581fd0f58" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Feb 16 11:24:57 crc kubenswrapper[4797]: I0216 11:24:57.874267 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dht7z-config-5ck9h" event={"ID":"5d4394ae-cae1-4db5-89a9-85912b8d08e5","Type":"ContainerStarted","Data":"7c17efbf5337bfe587e030f97c9abb8c4eb0225078a45264183c4d629bc8d0e8"}
Feb 16 11:24:57 crc kubenswrapper[4797]: I0216 11:24:57.874325 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dht7z-config-5ck9h" event={"ID":"5d4394ae-cae1-4db5-89a9-85912b8d08e5","Type":"ContainerStarted","Data":"90a91cafcd9dd9b9d9936555c3b164696add6189e9b7052c471dd71085034f0f"}
Feb 16 11:24:57 crc kubenswrapper[4797]: I0216 11:24:57.878756 4797 generic.go:334] "Generic (PLEG): container finished" podID="67122d2a-58c7-48a5-893b-1ad4382838eb" containerID="1e5ca7f38eaad9b767f59004d6270f9592320de7f66cc7456315450815de9b19" exitCode=0
Feb 16 11:24:57 crc kubenswrapper[4797]: I0216 11:24:57.878802 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a443-account-create-update-dqvgt" event={"ID":"67122d2a-58c7-48a5-893b-1ad4382838eb","Type":"ContainerDied","Data":"1e5ca7f38eaad9b767f59004d6270f9592320de7f66cc7456315450815de9b19"}
Feb 16 11:24:57 crc kubenswrapper[4797]: I0216 11:24:57.896452 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-dht7z-config-5ck9h" podStartSLOduration=1.8964341500000002 podStartE2EDuration="1.89643415s" podCreationTimestamp="2026-02-16 11:24:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:24:57.894877728 +0000 UTC m=+1092.615062708" watchObservedRunningTime="2026-02-16 11:24:57.89643415 +0000 UTC m=+1092.616619130"
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.267714 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.284377 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2435c436-da01-4acc-a193-7f1337ece1ef-operator-scripts\") pod \"2435c436-da01-4acc-a193-7f1337ece1ef\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") "
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.284439 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncmkx\" (UniqueName: \"kubernetes.io/projected/2435c436-da01-4acc-a193-7f1337ece1ef-kube-api-access-ncmkx\") pod \"2435c436-da01-4acc-a193-7f1337ece1ef\" (UID: \"2435c436-da01-4acc-a193-7f1337ece1ef\") "
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.285055 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2435c436-da01-4acc-a193-7f1337ece1ef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2435c436-da01-4acc-a193-7f1337ece1ef" (UID: "2435c436-da01-4acc-a193-7f1337ece1ef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.285366 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2435c436-da01-4acc-a193-7f1337ece1ef-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.312090 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2435c436-da01-4acc-a193-7f1337ece1ef-kube-api-access-ncmkx" (OuterVolumeSpecName: "kube-api-access-ncmkx") pod "2435c436-da01-4acc-a193-7f1337ece1ef" (UID: "2435c436-da01-4acc-a193-7f1337ece1ef"). InnerVolumeSpecName "kube-api-access-ncmkx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.386840 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncmkx\" (UniqueName: \"kubernetes.io/projected/2435c436-da01-4acc-a193-7f1337ece1ef-kube-api-access-ncmkx\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.889571 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-j46lb" event={"ID":"2435c436-da01-4acc-a193-7f1337ece1ef","Type":"ContainerDied","Data":"20af4750e9ca94d45f8696ffa83e7e59ef066580b7fbd65b1e1cf59f92fd914c"}
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.889629 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20af4750e9ca94d45f8696ffa83e7e59ef066580b7fbd65b1e1cf59f92fd914c"
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.889688 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-j46lb"
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.899360 4797 generic.go:334] "Generic (PLEG): container finished" podID="5d4394ae-cae1-4db5-89a9-85912b8d08e5" containerID="7c17efbf5337bfe587e030f97c9abb8c4eb0225078a45264183c4d629bc8d0e8" exitCode=0
Feb 16 11:24:58 crc kubenswrapper[4797]: I0216 11:24:58.899594 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dht7z-config-5ck9h" event={"ID":"5d4394ae-cae1-4db5-89a9-85912b8d08e5","Type":"ContainerDied","Data":"7c17efbf5337bfe587e030f97c9abb8c4eb0225078a45264183c4d629bc8d0e8"}
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.100064 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0"
Feb 16 11:24:59 crc kubenswrapper[4797]: E0216 11:24:59.100323 4797 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Feb 16 11:24:59 crc kubenswrapper[4797]: E0216 11:24:59.100354 4797 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Feb 16 11:24:59 crc kubenswrapper[4797]: E0216 11:24:59.100422 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift podName:9f443541-845c-4fdd-b6d1-08aba5c39667 nodeName:}" failed. No retries permitted until 2026-02-16 11:25:15.100399819 +0000 UTC m=+1109.820584799 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift") pod "swift-storage-0" (UID: "9f443541-845c-4fdd-b6d1-08aba5c39667") : configmap "swift-ring-files" not found
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.515845 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.615333 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67122d2a-58c7-48a5-893b-1ad4382838eb-operator-scripts\") pod \"67122d2a-58c7-48a5-893b-1ad4382838eb\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") "
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.615541 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5n78\" (UniqueName: \"kubernetes.io/projected/67122d2a-58c7-48a5-893b-1ad4382838eb-kube-api-access-c5n78\") pod \"67122d2a-58c7-48a5-893b-1ad4382838eb\" (UID: \"67122d2a-58c7-48a5-893b-1ad4382838eb\") "
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.615971 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67122d2a-58c7-48a5-893b-1ad4382838eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67122d2a-58c7-48a5-893b-1ad4382838eb" (UID: "67122d2a-58c7-48a5-893b-1ad4382838eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.620068 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67122d2a-58c7-48a5-893b-1ad4382838eb-kube-api-access-c5n78" (OuterVolumeSpecName: "kube-api-access-c5n78") pod "67122d2a-58c7-48a5-893b-1ad4382838eb" (UID: "67122d2a-58c7-48a5-893b-1ad4382838eb"). InnerVolumeSpecName "kube-api-access-c5n78". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.717157 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5n78\" (UniqueName: \"kubernetes.io/projected/67122d2a-58c7-48a5-893b-1ad4382838eb-kube-api-access-c5n78\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.717190 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67122d2a-58c7-48a5-893b-1ad4382838eb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.916363 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a443-account-create-update-dqvgt"
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.916367 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a443-account-create-update-dqvgt" event={"ID":"67122d2a-58c7-48a5-893b-1ad4382838eb","Type":"ContainerDied","Data":"acd279dcf4d8a55083f936235ec51f0dbfc86f13397f355a5ef23914b2483813"}
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.916510 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acd279dcf4d8a55083f936235ec51f0dbfc86f13397f355a5ef23914b2483813"
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.920676 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerStarted","Data":"02c91ff603dd2fbbb6758814df1561fcaeef0c774f6185d2cf42c6200762db7a"}
Feb 16 11:24:59 crc kubenswrapper[4797]: I0216 11:24:59.957991 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=9.669897583000001 podStartE2EDuration="1m8.957968973s" podCreationTimestamp="2026-02-16 11:23:51 +0000 UTC" firstStartedPulling="2026-02-16 11:24:00.221230629 +0000 UTC m=+1034.941415609" lastFinishedPulling="2026-02-16 11:24:59.509302019 +0000 UTC m=+1094.229486999" observedRunningTime="2026-02-16 11:24:59.947891419 +0000 UTC m=+1094.668076479" watchObservedRunningTime="2026-02-16 11:24:59.957968973 +0000 UTC m=+1094.678153963"
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.214894 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227253 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run-ovn\") pod \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") "
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227303 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run\") pod \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") "
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227404 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5d4394ae-cae1-4db5-89a9-85912b8d08e5" (UID: "5d4394ae-cae1-4db5-89a9-85912b8d08e5"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227514 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run" (OuterVolumeSpecName: "var-run") pod "5d4394ae-cae1-4db5-89a9-85912b8d08e5" (UID: "5d4394ae-cae1-4db5-89a9-85912b8d08e5"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227534 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldtxn\" (UniqueName: \"kubernetes.io/projected/5d4394ae-cae1-4db5-89a9-85912b8d08e5-kube-api-access-ldtxn\") pod \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") "
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227569 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-additional-scripts\") pod \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") "
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227632 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-scripts\") pod \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") "
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.227663 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-log-ovn\") pod \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\" (UID: \"5d4394ae-cae1-4db5-89a9-85912b8d08e5\") "
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.228164 4797 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.228188 4797 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.228215 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5d4394ae-cae1-4db5-89a9-85912b8d08e5" (UID: "5d4394ae-cae1-4db5-89a9-85912b8d08e5"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.229295 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-scripts" (OuterVolumeSpecName: "scripts") pod "5d4394ae-cae1-4db5-89a9-85912b8d08e5" (UID: "5d4394ae-cae1-4db5-89a9-85912b8d08e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.230144 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5d4394ae-cae1-4db5-89a9-85912b8d08e5" (UID: "5d4394ae-cae1-4db5-89a9-85912b8d08e5"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.232126 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d4394ae-cae1-4db5-89a9-85912b8d08e5-kube-api-access-ldtxn" (OuterVolumeSpecName: "kube-api-access-ldtxn") pod "5d4394ae-cae1-4db5-89a9-85912b8d08e5" (UID: "5d4394ae-cae1-4db5-89a9-85912b8d08e5"). InnerVolumeSpecName "kube-api-access-ldtxn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.330040 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldtxn\" (UniqueName: \"kubernetes.io/projected/5d4394ae-cae1-4db5-89a9-85912b8d08e5-kube-api-access-ldtxn\") on node \"crc\" DevicePath \"\""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.330080 4797 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.330091 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5d4394ae-cae1-4db5-89a9-85912b8d08e5-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.330099 4797 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5d4394ae-cae1-4db5-89a9-85912b8d08e5-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.929065 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-dht7z-config-5ck9h" event={"ID":"5d4394ae-cae1-4db5-89a9-85912b8d08e5","Type":"ContainerDied","Data":"90a91cafcd9dd9b9d9936555c3b164696add6189e9b7052c471dd71085034f0f"}
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.929102 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-dht7z-config-5ck9h"
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.929107 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90a91cafcd9dd9b9d9936555c3b164696add6189e9b7052c471dd71085034f0f"
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.930689 4797 generic.go:334] "Generic (PLEG): container finished" podID="7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" containerID="403f1f33da741a9ee0525981a22398321b280ff65e90789546ea9ebefe93541f" exitCode=0
Feb 16 11:25:00 crc kubenswrapper[4797]: I0216 11:25:00.930726 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jj9d5" event={"ID":"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a","Type":"ContainerDied","Data":"403f1f33da741a9ee0525981a22398321b280ff65e90789546ea9ebefe93541f"}
Feb 16 11:25:01 crc kubenswrapper[4797]: I0216 11:25:01.327293 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-dht7z-config-5ck9h"]
Feb 16 11:25:01 crc kubenswrapper[4797]: I0216 11:25:01.335754 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-dht7z-config-5ck9h"]
Feb 16 11:25:01 crc kubenswrapper[4797]: I0216 11:25:01.628801 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-l8dct"]
Feb 16 11:25:01 crc kubenswrapper[4797]: I0216 11:25:01.637942 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-l8dct"]
Feb 16 11:25:01 crc kubenswrapper[4797]: I0216 11:25:01.995992 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d4394ae-cae1-4db5-89a9-85912b8d08e5" path="/var/lib/kubelet/pods/5d4394ae-cae1-4db5-89a9-85912b8d08e5/volumes"
Feb 16 11:25:01 crc kubenswrapper[4797]: I0216 11:25:01.997122 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e1b26f2-2414-4971-b1f5-c69190057184" path="/var/lib/kubelet/pods/8e1b26f2-2414-4971-b1f5-c69190057184/volumes"
Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.328808 4797 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.372309 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-combined-ca-bundle\") pod \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.372430 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-ring-data-devices\") pod \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.372481 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-dispersionconf\") pod \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.372520 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-scripts\") pod \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.372557 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-etc-swift\") pod \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.372622 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnvpf\" (UniqueName: \"kubernetes.io/projected/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-kube-api-access-bnvpf\") pod \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.372724 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-swiftconf\") pod \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\" (UID: \"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a\") " Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.373352 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" (UID: "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.373615 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" (UID: "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.378441 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-kube-api-access-bnvpf" (OuterVolumeSpecName: "kube-api-access-bnvpf") pod "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" (UID: "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a"). InnerVolumeSpecName "kube-api-access-bnvpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.380488 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" (UID: "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.398917 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-scripts" (OuterVolumeSpecName: "scripts") pod "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" (UID: "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.414693 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" (UID: "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.420209 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" (UID: "7b48cc2d-f411-40a8-81a8-e7fc66b9a30a"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.474772 4797 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.474831 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.474848 4797 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.474859 4797 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.474872 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.474882 4797 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.474893 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnvpf\" (UniqueName: \"kubernetes.io/projected/7b48cc2d-f411-40a8-81a8-e7fc66b9a30a-kube-api-access-bnvpf\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.952142 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jj9d5" event={"ID":"7b48cc2d-f411-40a8-81a8-e7fc66b9a30a","Type":"ContainerDied","Data":"00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b"} Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.952222 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00bf7b6855b358de71185370388658f57ea4a3df13873ad6372ab85a67b2196b" Feb 16 11:25:02 crc kubenswrapper[4797]: I0216 11:25:02.952317 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-jj9d5" Feb 16 11:25:03 crc kubenswrapper[4797]: I0216 11:25:03.313353 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:03 crc kubenswrapper[4797]: I0216 11:25:03.662753 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="4ab6c5d9-8717-4b1b-8d13-6eb03e52a080" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.363531 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.372866 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-zwkg2"] Feb 16 11:25:05 crc kubenswrapper[4797]: E0216 11:25:05.373309 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b75828bf-9dfb-4337-9ac4-710a7fbb62db" containerName="mariadb-account-create-update" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373330 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="b75828bf-9dfb-4337-9ac4-710a7fbb62db" containerName="mariadb-account-create-update" Feb 16 11:25:05 crc kubenswrapper[4797]: E0216 11:25:05.373342 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73dbbcd9-7ef8-4a40-ad11-d3f6de830711" containerName="mariadb-database-create" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373350 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="73dbbcd9-7ef8-4a40-ad11-d3f6de830711" containerName="mariadb-database-create" Feb 16 11:25:05 crc kubenswrapper[4797]: E0216 11:25:05.373363 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2435c436-da01-4acc-a193-7f1337ece1ef" containerName="mariadb-database-create" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373371 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="2435c436-da01-4acc-a193-7f1337ece1ef" containerName="mariadb-database-create" Feb 16 11:25:05 crc kubenswrapper[4797]: E0216 11:25:05.373381 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67122d2a-58c7-48a5-893b-1ad4382838eb" containerName="mariadb-account-create-update" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373389 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="67122d2a-58c7-48a5-893b-1ad4382838eb" containerName="mariadb-account-create-update" Feb 16 11:25:05 crc kubenswrapper[4797]: E0216 11:25:05.373408 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d4394ae-cae1-4db5-89a9-85912b8d08e5" containerName="ovn-config" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373416 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d4394ae-cae1-4db5-89a9-85912b8d08e5" containerName="ovn-config" Feb 16 11:25:05 crc kubenswrapper[4797]: E0216 11:25:05.373437 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" containerName="swift-ring-rebalance" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373444 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" containerName="swift-ring-rebalance" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373654 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="2435c436-da01-4acc-a193-7f1337ece1ef" 
containerName="mariadb-database-create" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373670 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="67122d2a-58c7-48a5-893b-1ad4382838eb" containerName="mariadb-account-create-update" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373684 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b48cc2d-f411-40a8-81a8-e7fc66b9a30a" containerName="swift-ring-rebalance" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373697 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="73dbbcd9-7ef8-4a40-ad11-d3f6de830711" containerName="mariadb-database-create" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373710 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="b75828bf-9dfb-4337-9ac4-710a7fbb62db" containerName="mariadb-account-create-update" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.373721 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d4394ae-cae1-4db5-89a9-85912b8d08e5" containerName="ovn-config" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.376378 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.384758 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.384973 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2cxxr" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.426133 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zwkg2"] Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.431348 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfwrf\" (UniqueName: \"kubernetes.io/projected/448a4a0f-a469-415f-8dcc-6223ee884c29-kube-api-access-vfwrf\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.431391 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-db-sync-config-data\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.431506 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-combined-ca-bundle\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.431559 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-config-data\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.533073 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-combined-ca-bundle\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.533141 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-config-data\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.533252 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfwrf\" (UniqueName: \"kubernetes.io/projected/448a4a0f-a469-415f-8dcc-6223ee884c29-kube-api-access-vfwrf\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.533285 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-db-sync-config-data\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.538951 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-combined-ca-bundle\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.539090 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-db-sync-config-data\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.540477 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-config-data\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.561732 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfwrf\" (UniqueName: \"kubernetes.io/projected/448a4a0f-a469-415f-8dcc-6223ee884c29-kube-api-access-vfwrf\") pod \"glance-db-sync-zwkg2\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.699569 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:05 crc kubenswrapper[4797]: I0216 11:25:05.767264 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-dht7z" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.419759 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-zwkg2"] Feb 16 11:25:06 crc kubenswrapper[4797]: W0216 11:25:06.429247 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod448a4a0f_a469_415f_8dcc_6223ee884c29.slice/crio-0d709728ee8ee7ea07112c6b80d3950d0a10b3e98da5dca082e02acce8a3fcdb WatchSource:0}: Error finding container 0d709728ee8ee7ea07112c6b80d3950d0a10b3e98da5dca082e02acce8a3fcdb: Status 404 returned error can't find the container with id 0d709728ee8ee7ea07112c6b80d3950d0a10b3e98da5dca082e02acce8a3fcdb Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.562875 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.745162 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-hlkwq"] Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.746659 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.754543 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8455a08f-921f-44b1-a66b-b8ac256526d9-operator-scripts\") pod \"root-account-create-update-hlkwq\" (UID: \"8455a08f-921f-44b1-a66b-b8ac256526d9\") " pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.754661 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvjwt\" (UniqueName: \"kubernetes.io/projected/8455a08f-921f-44b1-a66b-b8ac256526d9-kube-api-access-dvjwt\") pod \"root-account-create-update-hlkwq\" (UID: \"8455a08f-921f-44b1-a66b-b8ac256526d9\") " pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.755549 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.772787 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hlkwq"] Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.792781 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.856108 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8455a08f-921f-44b1-a66b-b8ac256526d9-operator-scripts\") pod \"root-account-create-update-hlkwq\" (UID: \"8455a08f-921f-44b1-a66b-b8ac256526d9\") " pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.856164 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvjwt\" (UniqueName: \"kubernetes.io/projected/8455a08f-921f-44b1-a66b-b8ac256526d9-kube-api-access-dvjwt\") pod \"root-account-create-update-hlkwq\" (UID: 
\"8455a08f-921f-44b1-a66b-b8ac256526d9\") " pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.856897 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8455a08f-921f-44b1-a66b-b8ac256526d9-operator-scripts\") pod \"root-account-create-update-hlkwq\" (UID: \"8455a08f-921f-44b1-a66b-b8ac256526d9\") " pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.913227 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvjwt\" (UniqueName: \"kubernetes.io/projected/8455a08f-921f-44b1-a66b-b8ac256526d9-kube-api-access-dvjwt\") pod \"root-account-create-update-hlkwq\" (UID: \"8455a08f-921f-44b1-a66b-b8ac256526d9\") " pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.976348 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-t99zf"] Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.977857 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:06 crc kubenswrapper[4797]: I0216 11:25:06.995633 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-t99zf"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.019993 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zwkg2" event={"ID":"448a4a0f-a469-415f-8dcc-6223ee884c29","Type":"ContainerStarted","Data":"0d709728ee8ee7ea07112c6b80d3950d0a10b3e98da5dca082e02acce8a3fcdb"} Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.069403 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.085740 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-fk95f"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.087287 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.097150 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fk95f"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.166042 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b289e9a1-2299-4c30-8a6a-ac125a3342ca-operator-scripts\") pod \"cloudkitty-db-create-t99zf\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.166239 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cxx9\" (UniqueName: \"kubernetes.io/projected/b289e9a1-2299-4c30-8a6a-ac125a3342ca-kube-api-access-2cxx9\") pod \"cloudkitty-db-create-t99zf\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.189222 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-a1eb-account-create-update-d8ls4"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.193712 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.221231 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.236294 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a1eb-account-create-update-d8ls4"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.268624 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cxx9\" (UniqueName: \"kubernetes.io/projected/b289e9a1-2299-4c30-8a6a-ac125a3342ca-kube-api-access-2cxx9\") pod \"cloudkitty-db-create-t99zf\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.268712 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b289e9a1-2299-4c30-8a6a-ac125a3342ca-operator-scripts\") pod \"cloudkitty-db-create-t99zf\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.268789 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-operator-scripts\") pod \"cinder-db-create-fk95f\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.268825 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gkcc\" (UniqueName: \"kubernetes.io/projected/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-kube-api-access-6gkcc\") pod \"cinder-db-create-fk95f\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.269663 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b289e9a1-2299-4c30-8a6a-ac125a3342ca-operator-scripts\") pod \"cloudkitty-db-create-t99zf\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.288572 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-5266-account-create-update-bkcfj"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.289772 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.299813 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.338727 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-5266-account-create-update-bkcfj"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.343334 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cxx9\" (UniqueName: \"kubernetes.io/projected/b289e9a1-2299-4c30-8a6a-ac125a3342ca-kube-api-access-2cxx9\") pod \"cloudkitty-db-create-t99zf\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.371851 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-operator-scripts\") pod \"cinder-db-create-fk95f\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.371911 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2gm\" (UniqueName: \"kubernetes.io/projected/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-kube-api-access-lf2gm\") pod \"cinder-a1eb-account-create-update-d8ls4\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.371942 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gkcc\" (UniqueName: \"kubernetes.io/projected/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-kube-api-access-6gkcc\") pod \"cinder-db-create-fk95f\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.372060 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-operator-scripts\") pod \"cinder-a1eb-account-create-update-d8ls4\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.372786 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-operator-scripts\") pod \"cinder-db-create-fk95f\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.454190 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gkcc\" (UniqueName: \"kubernetes.io/projected/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-kube-api-access-6gkcc\") pod \"cinder-db-create-fk95f\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.466475 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-gbv4g"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.473698 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-operator-scripts\") pod \"cinder-a1eb-account-create-update-d8ls4\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.473751 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwb8c\" (UniqueName: \"kubernetes.io/projected/d13f8337-bf62-4444-bb7b-fbb9699373d4-kube-api-access-gwb8c\") pod \"cloudkitty-5266-account-create-update-bkcfj\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.473801 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf2gm\" (UniqueName: \"kubernetes.io/projected/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-kube-api-access-lf2gm\") pod \"cinder-a1eb-account-create-update-d8ls4\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.473867 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13f8337-bf62-4444-bb7b-fbb9699373d4-operator-scripts\") pod \"cloudkitty-5266-account-create-update-bkcfj\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.474626 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-operator-scripts\") pod \"cinder-a1eb-account-create-update-d8ls4\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.479330 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.485672 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.485922 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kpt68" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.486069 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.486171 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.492050 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf2gm\" (UniqueName: \"kubernetes.io/projected/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-kube-api-access-lf2gm\") pod \"cinder-a1eb-account-create-update-d8ls4\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.505943 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gbv4g"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.520753 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7460-account-create-update-87l9f"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.522089 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.526242 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.534278 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7460-account-create-update-87l9f"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.576035 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.578619 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e57501-f0cf-48c7-831e-d6782b7c1037-operator-scripts\") pod \"neutron-7460-account-create-update-87l9f\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.578689 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwb8c\" (UniqueName: \"kubernetes.io/projected/d13f8337-bf62-4444-bb7b-fbb9699373d4-kube-api-access-gwb8c\") pod \"cloudkitty-5266-account-create-update-bkcfj\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.578834 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13f8337-bf62-4444-bb7b-fbb9699373d4-operator-scripts\") pod \"cloudkitty-5266-account-create-update-bkcfj\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.578879 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4b6\" (UniqueName: \"kubernetes.io/projected/54f56706-9d2d-4034-ab0d-ed5023bdde18-kube-api-access-rn4b6\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.578938 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-config-data\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.579004 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjd8s\" (UniqueName: \"kubernetes.io/projected/67e57501-f0cf-48c7-831e-d6782b7c1037-kube-api-access-sjd8s\") pod \"neutron-7460-account-create-update-87l9f\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.579053 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-combined-ca-bundle\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.579803 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13f8337-bf62-4444-bb7b-fbb9699373d4-operator-scripts\") pod \"cloudkitty-5266-account-create-update-bkcfj\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.596885 4797 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-db-create-zfrh2"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.632107 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.643752 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.650237 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwb8c\" (UniqueName: \"kubernetes.io/projected/d13f8337-bf62-4444-bb7b-fbb9699373d4-kube-api-access-gwb8c\") pod \"cloudkitty-5266-account-create-update-bkcfj\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.662418 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.682144 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjd8s\" (UniqueName: \"kubernetes.io/projected/67e57501-f0cf-48c7-831e-d6782b7c1037-kube-api-access-sjd8s\") pod \"neutron-7460-account-create-update-87l9f\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.682214 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-combined-ca-bundle\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.682275 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e57501-f0cf-48c7-831e-d6782b7c1037-operator-scripts\") pod \"neutron-7460-account-create-update-87l9f\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.682373 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4b6\" (UniqueName: \"kubernetes.io/projected/54f56706-9d2d-4034-ab0d-ed5023bdde18-kube-api-access-rn4b6\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.682412 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-config-data\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.692810 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e57501-f0cf-48c7-831e-d6782b7c1037-operator-scripts\") pod \"neutron-7460-account-create-update-87l9f\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.693169 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-config-data\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.695556 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zfrh2"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.697785 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-combined-ca-bundle\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.717626 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.721216 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-pmnbl"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.723880 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.725044 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4b6\" (UniqueName: \"kubernetes.io/projected/54f56706-9d2d-4034-ab0d-ed5023bdde18-kube-api-access-rn4b6\") pod \"keystone-db-sync-gbv4g\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.725770 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjd8s\" (UniqueName: \"kubernetes.io/projected/67e57501-f0cf-48c7-831e-d6782b7c1037-kube-api-access-sjd8s\") pod \"neutron-7460-account-create-update-87l9f\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.752915 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pmnbl"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.761188 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-a80b-account-create-update-swbm2"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.763659 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.768236 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.773712 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a80b-account-create-update-swbm2"] Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.786926 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa1b2e9-8e1b-4a90-aae6-49476b717d71-operator-scripts\") pod \"neutron-db-create-zfrh2\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.786982 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgbt\" (UniqueName: \"kubernetes.io/projected/afa1b2e9-8e1b-4a90-aae6-49476b717d71-kube-api-access-5jgbt\") pod \"neutron-db-create-zfrh2\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.815172 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.850075 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.889629 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtw99\" (UniqueName: \"kubernetes.io/projected/6650dd6b-74e9-407a-8690-6845e881427f-kube-api-access-wtw99\") pod \"barbican-a80b-account-create-update-swbm2\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.889708 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa1b2e9-8e1b-4a90-aae6-49476b717d71-operator-scripts\") pod \"neutron-db-create-zfrh2\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.890275 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgbt\" (UniqueName: \"kubernetes.io/projected/afa1b2e9-8e1b-4a90-aae6-49476b717d71-kube-api-access-5jgbt\") pod \"neutron-db-create-zfrh2\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.890448 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31bcde1-c735-4e57-907d-2876334827d6-operator-scripts\") pod \"barbican-db-create-pmnbl\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.890482 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqnlr\" (UniqueName: \"kubernetes.io/projected/e31bcde1-c735-4e57-907d-2876334827d6-kube-api-access-xqnlr\") pod 
\"barbican-db-create-pmnbl\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.890519 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6650dd6b-74e9-407a-8690-6845e881427f-operator-scripts\") pod \"barbican-a80b-account-create-update-swbm2\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.890647 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa1b2e9-8e1b-4a90-aae6-49476b717d71-operator-scripts\") pod \"neutron-db-create-zfrh2\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.913285 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jgbt\" (UniqueName: \"kubernetes.io/projected/afa1b2e9-8e1b-4a90-aae6-49476b717d71-kube-api-access-5jgbt\") pod \"neutron-db-create-zfrh2\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.987918 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.994454 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtw99\" (UniqueName: \"kubernetes.io/projected/6650dd6b-74e9-407a-8690-6845e881427f-kube-api-access-wtw99\") pod \"barbican-a80b-account-create-update-swbm2\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.994613 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31bcde1-c735-4e57-907d-2876334827d6-operator-scripts\") pod \"barbican-db-create-pmnbl\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.995355 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqnlr\" (UniqueName: \"kubernetes.io/projected/e31bcde1-c735-4e57-907d-2876334827d6-kube-api-access-xqnlr\") pod \"barbican-db-create-pmnbl\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.995384 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6650dd6b-74e9-407a-8690-6845e881427f-operator-scripts\") pod \"barbican-a80b-account-create-update-swbm2\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:07 crc kubenswrapper[4797]: I0216 11:25:07.997067 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6650dd6b-74e9-407a-8690-6845e881427f-operator-scripts\") pod \"barbican-a80b-account-create-update-swbm2\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:07 
crc kubenswrapper[4797]: I0216 11:25:07.995295 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31bcde1-c735-4e57-907d-2876334827d6-operator-scripts\") pod \"barbican-db-create-pmnbl\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.028009 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtw99\" (UniqueName: \"kubernetes.io/projected/6650dd6b-74e9-407a-8690-6845e881427f-kube-api-access-wtw99\") pod \"barbican-a80b-account-create-update-swbm2\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.028888 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-hlkwq"] Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.036501 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqnlr\" (UniqueName: \"kubernetes.io/projected/e31bcde1-c735-4e57-907d-2876334827d6-kube-api-access-xqnlr\") pod \"barbican-db-create-pmnbl\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.079340 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.091543 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.282106 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-a1eb-account-create-update-d8ls4"] Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.312781 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.318615 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:08 crc kubenswrapper[4797]: W0216 11:25:08.335593 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca5b2e47_863e_424d_9dd6_d8ed4b9e518e.slice/crio-cf4d81c70036c1c541ef7721b794dc5410a51a282a6bbb8b4fc0355984d7230d WatchSource:0}: Error finding container cf4d81c70036c1c541ef7721b794dc5410a51a282a6bbb8b4fc0355984d7230d: Status 404 returned error can't find the container with id cf4d81c70036c1c541ef7721b794dc5410a51a282a6bbb8b4fc0355984d7230d Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.752531 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-t99zf"] Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.763540 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-fk95f"] Feb 16 11:25:08 crc kubenswrapper[4797]: W0216 11:25:08.779012 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74f1a3eb_a2b3_4df1_9ff8_0dd525ea746e.slice/crio-3b8bd5479e8137f7b826c1e6dfab459ceceeec1d847c91686cf471bba0102f67 WatchSource:0}: Error finding container 3b8bd5479e8137f7b826c1e6dfab459ceceeec1d847c91686cf471bba0102f67: Status 404 returned 
error can't find the container with id 3b8bd5479e8137f7b826c1e6dfab459ceceeec1d847c91686cf471bba0102f67 Feb 16 11:25:08 crc kubenswrapper[4797]: W0216 11:25:08.783009 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb289e9a1_2299_4c30_8a6a_ac125a3342ca.slice/crio-c4bafce3000e5d8d583e92f3cfa349213d15a1a7647ab913fa86c398ab2e6a70 WatchSource:0}: Error finding container c4bafce3000e5d8d583e92f3cfa349213d15a1a7647ab913fa86c398ab2e6a70: Status 404 returned error can't find the container with id c4bafce3000e5d8d583e92f3cfa349213d15a1a7647ab913fa86c398ab2e6a70 Feb 16 11:25:08 crc kubenswrapper[4797]: I0216 11:25:08.800321 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-5266-account-create-update-bkcfj"] Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:08.868541 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-zfrh2"] Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:08.882661 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7460-account-create-update-87l9f"] Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:08.892307 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-gbv4g"] Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:08.904973 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-pmnbl"] Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.056890 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pmnbl" event={"ID":"e31bcde1-c735-4e57-907d-2876334827d6","Type":"ContainerStarted","Data":"6d0498c7fa295d801ed51c6e785aba5b2c9b53d446f462f6981b2da6153c0a88"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.057863 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gbv4g" event={"ID":"54f56706-9d2d-4034-ab0d-ed5023bdde18","Type":"ContainerStarted","Data":"41c8219fbfe6036e698050ac9c636999203dd2bd80721f22de9b99affa4fc69a"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.060546 4797 generic.go:334] "Generic (PLEG): container finished" podID="8455a08f-921f-44b1-a66b-b8ac256526d9" containerID="f2a9ed58a09ed80438a40c91a5656cac8119e4dab548ee7251ce5374121429a3" exitCode=0 Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.060623 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hlkwq" event={"ID":"8455a08f-921f-44b1-a66b-b8ac256526d9","Type":"ContainerDied","Data":"f2a9ed58a09ed80438a40c91a5656cac8119e4dab548ee7251ce5374121429a3"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.060654 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hlkwq" event={"ID":"8455a08f-921f-44b1-a66b-b8ac256526d9","Type":"ContainerStarted","Data":"53085f7d273053770992980c104d25e8b8c8a3edc5f8c097cb6cdb60264e19f0"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.067253 4797 generic.go:334] "Generic (PLEG): container finished" podID="ca5b2e47-863e-424d-9dd6-d8ed4b9e518e" containerID="d5e0ed896d60ef2405447c3bfb2ff06086da5dd11d0c951c36562714c7b6e738" exitCode=0 Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.067323 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a1eb-account-create-update-d8ls4" 
event={"ID":"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e","Type":"ContainerDied","Data":"d5e0ed896d60ef2405447c3bfb2ff06086da5dd11d0c951c36562714c7b6e738"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.067382 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a1eb-account-create-update-d8ls4" event={"ID":"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e","Type":"ContainerStarted","Data":"cf4d81c70036c1c541ef7721b794dc5410a51a282a6bbb8b4fc0355984d7230d"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.069093 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-t99zf" event={"ID":"b289e9a1-2299-4c30-8a6a-ac125a3342ca","Type":"ContainerStarted","Data":"c4bafce3000e5d8d583e92f3cfa349213d15a1a7647ab913fa86c398ab2e6a70"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.081873 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5266-account-create-update-bkcfj" event={"ID":"d13f8337-bf62-4444-bb7b-fbb9699373d4","Type":"ContainerStarted","Data":"ba5157d42a9a041f46ada331ad075c0d2d7f8da193278a68a62a1150f5cc1aa2"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.082750 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-a80b-account-create-update-swbm2"] Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.084997 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fk95f" event={"ID":"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e","Type":"ContainerStarted","Data":"3b8bd5479e8137f7b826c1e6dfab459ceceeec1d847c91686cf471bba0102f67"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.095185 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zfrh2" event={"ID":"afa1b2e9-8e1b-4a90-aae6-49476b717d71","Type":"ContainerStarted","Data":"6a2de9ec8384e40053713618286c895123977b03f3ef21976d108b82ff834466"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.105908 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7460-account-create-update-87l9f" event={"ID":"67e57501-f0cf-48c7-831e-d6782b7c1037","Type":"ContainerStarted","Data":"09725f7978f4194acea9d8044f70b3c4b18c3aa27091e97583472d60b1198650"} Feb 16 11:25:09 crc kubenswrapper[4797]: I0216 11:25:09.117361 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.117905 4797 generic.go:334] "Generic (PLEG): container finished" podID="afa1b2e9-8e1b-4a90-aae6-49476b717d71" containerID="233da805d3129489b8b04771c283b0a723e20d5423f5499fd319cf1902de9f30" exitCode=0 Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.118232 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zfrh2" event={"ID":"afa1b2e9-8e1b-4a90-aae6-49476b717d71","Type":"ContainerDied","Data":"233da805d3129489b8b04771c283b0a723e20d5423f5499fd319cf1902de9f30"} Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.121537 4797 generic.go:334] "Generic (PLEG): container finished" podID="e31bcde1-c735-4e57-907d-2876334827d6" containerID="5af453300b23a63287567169a3dcbb9b7b44a69741b5dc211b1a7bc79b6b5807" exitCode=0 Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.121697 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pmnbl" event={"ID":"e31bcde1-c735-4e57-907d-2876334827d6","Type":"ContainerDied","Data":"5af453300b23a63287567169a3dcbb9b7b44a69741b5dc211b1a7bc79b6b5807"} Feb 
16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.123808 4797 generic.go:334] "Generic (PLEG): container finished" podID="67e57501-f0cf-48c7-831e-d6782b7c1037" containerID="770a6afe839bb16ba5d591bb9a43c00a3bb8b1ffdf84b36d04aa03e6352d2c57" exitCode=0 Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.123865 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7460-account-create-update-87l9f" event={"ID":"67e57501-f0cf-48c7-831e-d6782b7c1037","Type":"ContainerDied","Data":"770a6afe839bb16ba5d591bb9a43c00a3bb8b1ffdf84b36d04aa03e6352d2c57"} Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.125436 4797 generic.go:334] "Generic (PLEG): container finished" podID="6650dd6b-74e9-407a-8690-6845e881427f" containerID="d8a376174d977c9cae8f858563d6c7cee2850c184b870c353becf3fe2b51dcba" exitCode=0 Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.125490 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a80b-account-create-update-swbm2" event={"ID":"6650dd6b-74e9-407a-8690-6845e881427f","Type":"ContainerDied","Data":"d8a376174d977c9cae8f858563d6c7cee2850c184b870c353becf3fe2b51dcba"} Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.125510 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a80b-account-create-update-swbm2" event={"ID":"6650dd6b-74e9-407a-8690-6845e881427f","Type":"ContainerStarted","Data":"64be4a61beea983731b66464d111e6bb63a064a2f1f65a72d3460bfa8b2a1a7f"} Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.127683 4797 generic.go:334] "Generic (PLEG): container finished" podID="b289e9a1-2299-4c30-8a6a-ac125a3342ca" containerID="d8dadc3d00c78202bdaac763f8fa2ec67cd6618de8bf87d82d6536d85ffaad28" exitCode=0 Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.127737 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-t99zf" event={"ID":"b289e9a1-2299-4c30-8a6a-ac125a3342ca","Type":"ContainerDied","Data":"d8dadc3d00c78202bdaac763f8fa2ec67cd6618de8bf87d82d6536d85ffaad28"} Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.129760 4797 generic.go:334] "Generic (PLEG): container finished" podID="d13f8337-bf62-4444-bb7b-fbb9699373d4" containerID="ec1f87adf21f050635a4a4bbc350991fa185bde0cb5f7bd4ced052beff136436" exitCode=0 Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.129806 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5266-account-create-update-bkcfj" event={"ID":"d13f8337-bf62-4444-bb7b-fbb9699373d4","Type":"ContainerDied","Data":"ec1f87adf21f050635a4a4bbc350991fa185bde0cb5f7bd4ced052beff136436"} Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.141874 4797 generic.go:334] "Generic (PLEG): container finished" podID="74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e" containerID="e7910a16b83089268576c8da00999f2414abb2e3fa40a8c0c211db850afc4a71" exitCode=0 Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.142300 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fk95f" event={"ID":"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e","Type":"ContainerDied","Data":"e7910a16b83089268576c8da00999f2414abb2e3fa40a8c0c211db850afc4a71"} Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.691266 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.702030 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.771051 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvjwt\" (UniqueName: \"kubernetes.io/projected/8455a08f-921f-44b1-a66b-b8ac256526d9-kube-api-access-dvjwt\") pod \"8455a08f-921f-44b1-a66b-b8ac256526d9\" (UID: \"8455a08f-921f-44b1-a66b-b8ac256526d9\") " Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.771468 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8455a08f-921f-44b1-a66b-b8ac256526d9-operator-scripts\") pod \"8455a08f-921f-44b1-a66b-b8ac256526d9\" (UID: \"8455a08f-921f-44b1-a66b-b8ac256526d9\") " Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.772070 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8455a08f-921f-44b1-a66b-b8ac256526d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8455a08f-921f-44b1-a66b-b8ac256526d9" (UID: "8455a08f-921f-44b1-a66b-b8ac256526d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.772340 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8455a08f-921f-44b1-a66b-b8ac256526d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.783353 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8455a08f-921f-44b1-a66b-b8ac256526d9-kube-api-access-dvjwt" (OuterVolumeSpecName: "kube-api-access-dvjwt") pod "8455a08f-921f-44b1-a66b-b8ac256526d9" (UID: "8455a08f-921f-44b1-a66b-b8ac256526d9"). InnerVolumeSpecName "kube-api-access-dvjwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.874877 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-operator-scripts\") pod \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.875531 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf2gm\" (UniqueName: \"kubernetes.io/projected/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-kube-api-access-lf2gm\") pod \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\" (UID: \"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e\") " Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.875739 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca5b2e47-863e-424d-9dd6-d8ed4b9e518e" (UID: "ca5b2e47-863e-424d-9dd6-d8ed4b9e518e"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.877074 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.877118 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvjwt\" (UniqueName: \"kubernetes.io/projected/8455a08f-921f-44b1-a66b-b8ac256526d9-kube-api-access-dvjwt\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.879417 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-kube-api-access-lf2gm" (OuterVolumeSpecName: "kube-api-access-lf2gm") pod "ca5b2e47-863e-424d-9dd6-d8ed4b9e518e" (UID: "ca5b2e47-863e-424d-9dd6-d8ed4b9e518e"). InnerVolumeSpecName "kube-api-access-lf2gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:10 crc kubenswrapper[4797]: I0216 11:25:10.978392 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf2gm\" (UniqueName: \"kubernetes.io/projected/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e-kube-api-access-lf2gm\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.156608 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-hlkwq" event={"ID":"8455a08f-921f-44b1-a66b-b8ac256526d9","Type":"ContainerDied","Data":"53085f7d273053770992980c104d25e8b8c8a3edc5f8c097cb6cdb60264e19f0"} Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.156642 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-hlkwq" Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.156647 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53085f7d273053770992980c104d25e8b8c8a3edc5f8c097cb6cdb60264e19f0" Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.161346 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-a1eb-account-create-update-d8ls4" event={"ID":"ca5b2e47-863e-424d-9dd6-d8ed4b9e518e","Type":"ContainerDied","Data":"cf4d81c70036c1c541ef7721b794dc5410a51a282a6bbb8b4fc0355984d7230d"} Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.161369 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4d81c70036c1c541ef7721b794dc5410a51a282a6bbb8b4fc0355984d7230d" Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.161898 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-a1eb-account-create-update-d8ls4" Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.732490 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.846653 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.847101 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="prometheus" containerID="cri-o://16415f3ace1f92241ac1bc115d0fd48d6634facace715c2436c85e569e2a7a89" gracePeriod=600 Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.847742 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="thanos-sidecar" containerID="cri-o://02c91ff603dd2fbbb6758814df1561fcaeef0c774f6185d2cf42c6200762db7a" gracePeriod=600 Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.847829 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="config-reloader" containerID="cri-o://ba83bf3ef96f3074fa46a9c5ce77e7912ec1fa6b24485db7cb8cec10de2f8696" gracePeriod=600 Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.913252 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31bcde1-c735-4e57-907d-2876334827d6-operator-scripts\") pod \"e31bcde1-c735-4e57-907d-2876334827d6\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.913441 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqnlr\" (UniqueName: \"kubernetes.io/projected/e31bcde1-c735-4e57-907d-2876334827d6-kube-api-access-xqnlr\") pod \"e31bcde1-c735-4e57-907d-2876334827d6\" (UID: \"e31bcde1-c735-4e57-907d-2876334827d6\") " Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.915243 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e31bcde1-c735-4e57-907d-2876334827d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e31bcde1-c735-4e57-907d-2876334827d6" (UID: "e31bcde1-c735-4e57-907d-2876334827d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:11 crc kubenswrapper[4797]: I0216 11:25:11.925380 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e31bcde1-c735-4e57-907d-2876334827d6-kube-api-access-xqnlr" (OuterVolumeSpecName: "kube-api-access-xqnlr") pod "e31bcde1-c735-4e57-907d-2876334827d6" (UID: "e31bcde1-c735-4e57-907d-2876334827d6"). InnerVolumeSpecName "kube-api-access-xqnlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.016438 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e31bcde1-c735-4e57-907d-2876334827d6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.017168 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqnlr\" (UniqueName: \"kubernetes.io/projected/e31bcde1-c735-4e57-907d-2876334827d6-kube-api-access-xqnlr\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.185062 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-pmnbl" event={"ID":"e31bcde1-c735-4e57-907d-2876334827d6","Type":"ContainerDied","Data":"6d0498c7fa295d801ed51c6e785aba5b2c9b53d446f462f6981b2da6153c0a88"} Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.185126 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d0498c7fa295d801ed51c6e785aba5b2c9b53d446f462f6981b2da6153c0a88" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.185225 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-pmnbl" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.203636 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-a80b-account-create-update-swbm2" event={"ID":"6650dd6b-74e9-407a-8690-6845e881427f","Type":"ContainerDied","Data":"64be4a61beea983731b66464d111e6bb63a064a2f1f65a72d3460bfa8b2a1a7f"} Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.203708 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64be4a61beea983731b66464d111e6bb63a064a2f1f65a72d3460bfa8b2a1a7f" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.250662 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-t99zf" event={"ID":"b289e9a1-2299-4c30-8a6a-ac125a3342ca","Type":"ContainerDied","Data":"c4bafce3000e5d8d583e92f3cfa349213d15a1a7647ab913fa86c398ab2e6a70"} Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.250730 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4bafce3000e5d8d583e92f3cfa349213d15a1a7647ab913fa86c398ab2e6a70" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.264276 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-zfrh2" event={"ID":"afa1b2e9-8e1b-4a90-aae6-49476b717d71","Type":"ContainerDied","Data":"6a2de9ec8384e40053713618286c895123977b03f3ef21976d108b82ff834466"} Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.264348 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2de9ec8384e40053713618286c895123977b03f3ef21976d108b82ff834466" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.271950 4797 generic.go:334] "Generic (PLEG): container finished" podID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerID="02c91ff603dd2fbbb6758814df1561fcaeef0c774f6185d2cf42c6200762db7a" exitCode=0 Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.271981 4797 generic.go:334] "Generic (PLEG): container finished" podID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerID="ba83bf3ef96f3074fa46a9c5ce77e7912ec1fa6b24485db7cb8cec10de2f8696" exitCode=0 Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.271989 4797 generic.go:334] "Generic (PLEG): container finished" 
podID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerID="16415f3ace1f92241ac1bc115d0fd48d6634facace715c2436c85e569e2a7a89" exitCode=0 Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.272013 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerDied","Data":"02c91ff603dd2fbbb6758814df1561fcaeef0c774f6185d2cf42c6200762db7a"} Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.272036 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerDied","Data":"ba83bf3ef96f3074fa46a9c5ce77e7912ec1fa6b24485db7cb8cec10de2f8696"} Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.272045 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerDied","Data":"16415f3ace1f92241ac1bc115d0fd48d6634facace715c2436c85e569e2a7a89"} Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.387261 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.403952 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.411840 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.424837 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.444339 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.447071 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543502 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjd8s\" (UniqueName: \"kubernetes.io/projected/67e57501-f0cf-48c7-831e-d6782b7c1037-kube-api-access-sjd8s\") pod \"67e57501-f0cf-48c7-831e-d6782b7c1037\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543547 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b289e9a1-2299-4c30-8a6a-ac125a3342ca-operator-scripts\") pod \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543606 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtw99\" (UniqueName: \"kubernetes.io/projected/6650dd6b-74e9-407a-8690-6845e881427f-kube-api-access-wtw99\") pod \"6650dd6b-74e9-407a-8690-6845e881427f\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543634 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e57501-f0cf-48c7-831e-d6782b7c1037-operator-scripts\") pod \"67e57501-f0cf-48c7-831e-d6782b7c1037\" (UID: \"67e57501-f0cf-48c7-831e-d6782b7c1037\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543664 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jgbt\" (UniqueName: \"kubernetes.io/projected/afa1b2e9-8e1b-4a90-aae6-49476b717d71-kube-api-access-5jgbt\") pod \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543752 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-operator-scripts\") pod \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543771 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6650dd6b-74e9-407a-8690-6845e881427f-operator-scripts\") pod \"6650dd6b-74e9-407a-8690-6845e881427f\" (UID: \"6650dd6b-74e9-407a-8690-6845e881427f\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543799 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa1b2e9-8e1b-4a90-aae6-49476b717d71-operator-scripts\") pod \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\" (UID: \"afa1b2e9-8e1b-4a90-aae6-49476b717d71\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543834 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cxx9\" (UniqueName: \"kubernetes.io/projected/b289e9a1-2299-4c30-8a6a-ac125a3342ca-kube-api-access-2cxx9\") pod \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\" (UID: \"b289e9a1-2299-4c30-8a6a-ac125a3342ca\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.543849 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gkcc\" (UniqueName: 
\"kubernetes.io/projected/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-kube-api-access-6gkcc\") pod \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\" (UID: \"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.544785 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa1b2e9-8e1b-4a90-aae6-49476b717d71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afa1b2e9-8e1b-4a90-aae6-49476b717d71" (UID: "afa1b2e9-8e1b-4a90-aae6-49476b717d71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.544841 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b289e9a1-2299-4c30-8a6a-ac125a3342ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b289e9a1-2299-4c30-8a6a-ac125a3342ca" (UID: "b289e9a1-2299-4c30-8a6a-ac125a3342ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.544852 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6650dd6b-74e9-407a-8690-6845e881427f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6650dd6b-74e9-407a-8690-6845e881427f" (UID: "6650dd6b-74e9-407a-8690-6845e881427f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.544882 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e57501-f0cf-48c7-831e-d6782b7c1037-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67e57501-f0cf-48c7-831e-d6782b7c1037" (UID: "67e57501-f0cf-48c7-831e-d6782b7c1037"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.545350 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e" (UID: "74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.547630 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa1b2e9-8e1b-4a90-aae6-49476b717d71-kube-api-access-5jgbt" (OuterVolumeSpecName: "kube-api-access-5jgbt") pod "afa1b2e9-8e1b-4a90-aae6-49476b717d71" (UID: "afa1b2e9-8e1b-4a90-aae6-49476b717d71"). InnerVolumeSpecName "kube-api-access-5jgbt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.548271 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6650dd6b-74e9-407a-8690-6845e881427f-kube-api-access-wtw99" (OuterVolumeSpecName: "kube-api-access-wtw99") pod "6650dd6b-74e9-407a-8690-6845e881427f" (UID: "6650dd6b-74e9-407a-8690-6845e881427f"). InnerVolumeSpecName "kube-api-access-wtw99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.548496 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-kube-api-access-6gkcc" (OuterVolumeSpecName: "kube-api-access-6gkcc") pod "74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e" (UID: "74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e"). InnerVolumeSpecName "kube-api-access-6gkcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.548724 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b289e9a1-2299-4c30-8a6a-ac125a3342ca-kube-api-access-2cxx9" (OuterVolumeSpecName: "kube-api-access-2cxx9") pod "b289e9a1-2299-4c30-8a6a-ac125a3342ca" (UID: "b289e9a1-2299-4c30-8a6a-ac125a3342ca"). InnerVolumeSpecName "kube-api-access-2cxx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.549196 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e57501-f0cf-48c7-831e-d6782b7c1037-kube-api-access-sjd8s" (OuterVolumeSpecName: "kube-api-access-sjd8s") pod "67e57501-f0cf-48c7-831e-d6782b7c1037" (UID: "67e57501-f0cf-48c7-831e-d6782b7c1037"). InnerVolumeSpecName "kube-api-access-sjd8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.645902 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwb8c\" (UniqueName: \"kubernetes.io/projected/d13f8337-bf62-4444-bb7b-fbb9699373d4-kube-api-access-gwb8c\") pod \"d13f8337-bf62-4444-bb7b-fbb9699373d4\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.646005 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13f8337-bf62-4444-bb7b-fbb9699373d4-operator-scripts\") pod \"d13f8337-bf62-4444-bb7b-fbb9699373d4\" (UID: \"d13f8337-bf62-4444-bb7b-fbb9699373d4\") " Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.646546 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d13f8337-bf62-4444-bb7b-fbb9699373d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d13f8337-bf62-4444-bb7b-fbb9699373d4" (UID: "d13f8337-bf62-4444-bb7b-fbb9699373d4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647215 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtw99\" (UniqueName: \"kubernetes.io/projected/6650dd6b-74e9-407a-8690-6845e881427f-kube-api-access-wtw99\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647242 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e57501-f0cf-48c7-831e-d6782b7c1037-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647255 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jgbt\" (UniqueName: \"kubernetes.io/projected/afa1b2e9-8e1b-4a90-aae6-49476b717d71-kube-api-access-5jgbt\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647267 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d13f8337-bf62-4444-bb7b-fbb9699373d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647280 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647291 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6650dd6b-74e9-407a-8690-6845e881427f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647305 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afa1b2e9-8e1b-4a90-aae6-49476b717d71-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647319 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cxx9\" (UniqueName: \"kubernetes.io/projected/b289e9a1-2299-4c30-8a6a-ac125a3342ca-kube-api-access-2cxx9\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647332 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gkcc\" (UniqueName: \"kubernetes.io/projected/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e-kube-api-access-6gkcc\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647345 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjd8s\" (UniqueName: \"kubernetes.io/projected/67e57501-f0cf-48c7-831e-d6782b7c1037-kube-api-access-sjd8s\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.647357 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b289e9a1-2299-4c30-8a6a-ac125a3342ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.650690 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d13f8337-bf62-4444-bb7b-fbb9699373d4-kube-api-access-gwb8c" (OuterVolumeSpecName: "kube-api-access-gwb8c") pod "d13f8337-bf62-4444-bb7b-fbb9699373d4" (UID: "d13f8337-bf62-4444-bb7b-fbb9699373d4"). InnerVolumeSpecName "kube-api-access-gwb8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:12 crc kubenswrapper[4797]: I0216 11:25:12.749218 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwb8c\" (UniqueName: \"kubernetes.io/projected/d13f8337-bf62-4444-bb7b-fbb9699373d4-kube-api-access-gwb8c\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.284149 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-5266-account-create-update-bkcfj" event={"ID":"d13f8337-bf62-4444-bb7b-fbb9699373d4","Type":"ContainerDied","Data":"ba5157d42a9a041f46ada331ad075c0d2d7f8da193278a68a62a1150f5cc1aa2"} Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.284447 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba5157d42a9a041f46ada331ad075c0d2d7f8da193278a68a62a1150f5cc1aa2" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.284185 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-5266-account-create-update-bkcfj" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.291984 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-fk95f" event={"ID":"74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e","Type":"ContainerDied","Data":"3b8bd5479e8137f7b826c1e6dfab459ceceeec1d847c91686cf471bba0102f67"} Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.292031 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8bd5479e8137f7b826c1e6dfab459ceceeec1d847c91686cf471bba0102f67" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.292109 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-fk95f" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.295004 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-a80b-account-create-update-swbm2" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.295434 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7460-account-create-update-87l9f" event={"ID":"67e57501-f0cf-48c7-831e-d6782b7c1037","Type":"ContainerDied","Data":"09725f7978f4194acea9d8044f70b3c4b18c3aa27091e97583472d60b1198650"} Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.295491 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09725f7978f4194acea9d8044f70b3c4b18c3aa27091e97583472d60b1198650" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.295551 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7460-account-create-update-87l9f" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.295551 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-zfrh2" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.295826 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-t99zf" Feb 16 11:25:13 crc kubenswrapper[4797]: I0216 11:25:13.663824 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 11:25:15 crc kubenswrapper[4797]: I0216 11:25:15.202082 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:25:15 crc kubenswrapper[4797]: I0216 11:25:15.208275 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9f443541-845c-4fdd-b6d1-08aba5c39667-etc-swift\") pod \"swift-storage-0\" (UID: \"9f443541-845c-4fdd-b6d1-08aba5c39667\") " pod="openstack/swift-storage-0" Feb 16 11:25:15 crc kubenswrapper[4797]: I0216 11:25:15.457557 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 11:25:15 crc kubenswrapper[4797]: I0216 11:25:15.945194 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123433 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-thanos-prometheus-http-client-file\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123670 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123752 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdp9s\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-kube-api-access-gdp9s\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123777 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/113930a6-db19-4e43-bd2b-75ef1d11c021-config-out\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123824 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-config\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123877 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-0\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc 
kubenswrapper[4797]: I0216 11:25:16.123901 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-2\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123953 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-web-config\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.123988 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-1\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.124006 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-tls-assets\") pod \"113930a6-db19-4e43-bd2b-75ef1d11c021\" (UID: \"113930a6-db19-4e43-bd2b-75ef1d11c021\") " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.124928 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.125196 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.125292 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.128892 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-kube-api-access-gdp9s" (OuterVolumeSpecName: "kube-api-access-gdp9s") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "kube-api-access-gdp9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.131419 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/113930a6-db19-4e43-bd2b-75ef1d11c021-config-out" (OuterVolumeSpecName: "config-out") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.132216 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.132451 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-config" (OuterVolumeSpecName: "config") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.133764 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.149289 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.170421 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-web-config" (OuterVolumeSpecName: "web-config") pod "113930a6-db19-4e43-bd2b-75ef1d11c021" (UID: "113930a6-db19-4e43-bd2b-75ef1d11c021"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226064 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdp9s\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-kube-api-access-gdp9s\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226102 4797 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/113930a6-db19-4e43-bd2b-75ef1d11c021-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226112 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226122 4797 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226132 4797 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226141 4797 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226150 4797 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/113930a6-db19-4e43-bd2b-75ef1d11c021-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226160 4797 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/113930a6-db19-4e43-bd2b-75ef1d11c021-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226168 4797 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/113930a6-db19-4e43-bd2b-75ef1d11c021-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.226202 4797 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") on node \"crc\" " Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.245861 4797 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.246028 4797 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd") on node "crc"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.315047 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.113:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.327501 4797 reconciler_common.go:293] "Volume detached for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") on node \"crc\" DevicePath \"\""
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.335792 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"113930a6-db19-4e43-bd2b-75ef1d11c021","Type":"ContainerDied","Data":"511f29667c83de2d4b714f2e976b94e8c362f7787df8b745e4f9df47ffc5fb8e"}
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.335852 4797 scope.go:117] "RemoveContainer" containerID="02c91ff603dd2fbbb6758814df1561fcaeef0c774f6185d2cf42c6200762db7a"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.335921 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.397703 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.402044 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.417621 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418006 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="init-config-reloader"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418029 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="init-config-reloader"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418050 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8455a08f-921f-44b1-a66b-b8ac256526d9" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418059 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="8455a08f-921f-44b1-a66b-b8ac256526d9" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418072 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e31bcde1-c735-4e57-907d-2876334827d6" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418081 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e31bcde1-c735-4e57-907d-2876334827d6" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418095 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="prometheus"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418103 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="prometheus"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418115 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa1b2e9-8e1b-4a90-aae6-49476b717d71" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418122 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa1b2e9-8e1b-4a90-aae6-49476b717d71" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418133 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418142 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418158 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d13f8337-bf62-4444-bb7b-fbb9699373d4" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418164 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d13f8337-bf62-4444-bb7b-fbb9699373d4" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418171 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6650dd6b-74e9-407a-8690-6845e881427f" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418178 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="6650dd6b-74e9-407a-8690-6845e881427f" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418187 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5b2e47-863e-424d-9dd6-d8ed4b9e518e" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418193 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5b2e47-863e-424d-9dd6-d8ed4b9e518e" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418200 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e57501-f0cf-48c7-831e-d6782b7c1037" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418205 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e57501-f0cf-48c7-831e-d6782b7c1037" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418211 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b289e9a1-2299-4c30-8a6a-ac125a3342ca" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418218 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="b289e9a1-2299-4c30-8a6a-ac125a3342ca" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418233 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="thanos-sidecar"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418238 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="thanos-sidecar"
Feb 16 11:25:16 crc kubenswrapper[4797]: E0216 11:25:16.418249 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="config-reloader"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418254 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="config-reloader"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418408 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="8455a08f-921f-44b1-a66b-b8ac256526d9" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418424 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa1b2e9-8e1b-4a90-aae6-49476b717d71" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418433 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="b289e9a1-2299-4c30-8a6a-ac125a3342ca" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418442 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="thanos-sidecar"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418451 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e31bcde1-c735-4e57-907d-2876334827d6" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418459 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="prometheus"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418468 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca5b2e47-863e-424d-9dd6-d8ed4b9e518e" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418476 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e" containerName="mariadb-database-create"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418485 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e57501-f0cf-48c7-831e-d6782b7c1037" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418496 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="6650dd6b-74e9-407a-8690-6845e881427f" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418503 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d13f8337-bf62-4444-bb7b-fbb9699373d4" containerName="mariadb-account-create-update"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.418514 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" containerName="config-reloader"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.420265 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.422167 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.423540 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.423569 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.423645 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.423711 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.423925 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.424033 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-9ggq8" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.424112 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.429373 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.441383 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529726 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529785 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76a621c6-7221-46cd-8385-2c733893ccd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529807 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529858 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76a621c6-7221-46cd-8385-2c733893ccd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529885 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529926 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7l2j\" (UniqueName: \"kubernetes.io/projected/76a621c6-7221-46cd-8385-2c733893ccd0-kube-api-access-k7l2j\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529951 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529981 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.529998 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.530020 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.530042 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.530067 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc 
kubenswrapper[4797]: I0216 11:25:16.530106 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.631484 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76a621c6-7221-46cd-8385-2c733893ccd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.631793 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.631856 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7l2j\" (UniqueName: \"kubernetes.io/projected/76a621c6-7221-46cd-8385-2c733893ccd0-kube-api-access-k7l2j\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.631893 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.631936 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.631967 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.632001 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.632027 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.632057 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.632096 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.632137 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.632167 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.632184 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76a621c6-7221-46cd-8385-2c733893ccd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.635354 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/76a621c6-7221-46cd-8385-2c733893ccd0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.635919 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.635996 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.636031 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.636401 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.636771 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/76a621c6-7221-46cd-8385-2c733893ccd0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.636814 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.638647 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.639320 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.640998 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/76a621c6-7221-46cd-8385-2c733893ccd0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.647607 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/76a621c6-7221-46cd-8385-2c733893ccd0-config\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0" Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.648409 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.648553 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1f45b484f2970997eddc6379d7fc57939204465e8f811ff0d82af263170b706/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.661444 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7l2j\" (UniqueName: \"kubernetes.io/projected/76a621c6-7221-46cd-8385-2c733893ccd0-kube-api-access-k7l2j\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.690765 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df021ea5-a720-42c1-8e92-2b1fc76ffbcd\") pod \"prometheus-metric-storage-0\" (UID: \"76a621c6-7221-46cd-8385-2c733893ccd0\") " pod="openstack/prometheus-metric-storage-0"
Feb 16 11:25:16 crc kubenswrapper[4797]: I0216 11:25:16.746992 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Feb 16 11:25:17 crc kubenswrapper[4797]: I0216 11:25:17.996918 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="113930a6-db19-4e43-bd2b-75ef1d11c021" path="/var/lib/kubelet/pods/113930a6-db19-4e43-bd2b-75ef1d11c021/volumes"
Feb 16 11:25:23 crc kubenswrapper[4797]: I0216 11:25:23.270995 4797 scope.go:117] "RemoveContainer" containerID="ba83bf3ef96f3074fa46a9c5ce77e7912ec1fa6b24485db7cb8cec10de2f8696"
Feb 16 11:25:23 crc kubenswrapper[4797]: I0216 11:25:23.296992 4797 scope.go:117] "RemoveContainer" containerID="16415f3ace1f92241ac1bc115d0fd48d6634facace715c2436c85e569e2a7a89"
Feb 16 11:25:23 crc kubenswrapper[4797]: I0216 11:25:23.469197 4797 scope.go:117] "RemoveContainer" containerID="10c3892c9f010c9fb931f8a6dd0937caf2bc16fbe0e8a41de98eccbd07627fe5"
Feb 16 11:25:23 crc kubenswrapper[4797]: I0216 11:25:23.823853 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 11:25:23 crc kubenswrapper[4797]: W0216 11:25:23.838371 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod76a621c6_7221_46cd_8385_2c733893ccd0.slice/crio-6300838b14197b4fbd35a75a2502c666ee7db6b0060488f849ad99305728dd52 WatchSource:0}: Error finding container 6300838b14197b4fbd35a75a2502c666ee7db6b0060488f849ad99305728dd52: Status 404 returned error can't find the container with id 6300838b14197b4fbd35a75a2502c666ee7db6b0060488f849ad99305728dd52
Feb 16 11:25:23 crc kubenswrapper[4797]: I0216 11:25:23.969030 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 11:25:24 crc kubenswrapper[4797]: I0216 11:25:24.420198 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zwkg2" event={"ID":"448a4a0f-a469-415f-8dcc-6223ee884c29","Type":"ContainerStarted","Data":"718500b1a588f795158e350d37d97282db7b75d658f9facf08b8ce3ed56b994b"}
Feb 16 11:25:24 crc kubenswrapper[4797]: I0216 11:25:24.422359 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gbv4g" event={"ID":"54f56706-9d2d-4034-ab0d-ed5023bdde18","Type":"ContainerStarted","Data":"0acc9339135065381acc685e6fe8636570ad035642ea46d8da868a4fd5d9730d"}
Feb 16 11:25:24 crc kubenswrapper[4797]: I0216 11:25:24.424332 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"8b18a6b7d4a101f8f157a59baa972a15745aaafeb31a428471d1aa3fc80c8754"}
Feb 16 11:25:24 crc kubenswrapper[4797]: I0216 11:25:24.442767 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76a621c6-7221-46cd-8385-2c733893ccd0","Type":"ContainerStarted","Data":"6300838b14197b4fbd35a75a2502c666ee7db6b0060488f849ad99305728dd52"}
Feb 16 11:25:24 crc kubenswrapper[4797]: I0216 11:25:24.461990 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-zwkg2" podStartSLOduration=2.579045863 podStartE2EDuration="19.46196765s" podCreationTimestamp="2026-02-16 11:25:05 +0000 UTC" firstStartedPulling="2026-02-16 11:25:06.431608853 +0000 UTC m=+1101.151793853" lastFinishedPulling="2026-02-16 11:25:23.31453066 +0000 UTC m=+1118.034715640" observedRunningTime="2026-02-16 11:25:24.454826235 +0000 UTC m=+1119.175011225" watchObservedRunningTime="2026-02-16 11:25:24.46196765 +0000 UTC m=+1119.182152640"
Feb 16 11:25:24 crc kubenswrapper[4797]: I0216 11:25:24.479761 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-gbv4g" podStartSLOduration=3.121103922 podStartE2EDuration="17.479734566s" podCreationTimestamp="2026-02-16 11:25:07 +0000 UTC" firstStartedPulling="2026-02-16 11:25:08.941899064 +0000 UTC m=+1103.662084044" lastFinishedPulling="2026-02-16 11:25:23.300529708 +0000 UTC m=+1118.020714688" observedRunningTime="2026-02-16 11:25:24.471344417 +0000 UTC m=+1119.191529417" watchObservedRunningTime="2026-02-16 11:25:24.479734566 +0000 UTC m=+1119.199919546"
Feb 16 11:25:25 crc kubenswrapper[4797]: I0216 11:25:25.455215 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"6f1be3f647f67824d6601eee1dc759d94774469423cb1347102fa9c79b6d17c1"}
Feb 16 11:25:25 crc kubenswrapper[4797]: I0216 11:25:25.455797 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"085e4f8ffe7e8743bad01bea4ed3c06882c3d0ad81cd838fa6bdc97ae2e4a1f6"}
Feb 16 11:25:26 crc kubenswrapper[4797]: I0216 11:25:26.462634 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"2ddecd8a884e9a7ca53f04523bb25728820bf190bcf0de24456c51a22daf843b"}
Feb 16 11:25:26 crc kubenswrapper[4797]: I0216 11:25:26.462974 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"90ae6072d3dcee4828911d5f4dce8b86af9fe00954daa4293da9c0e908b40e66"}
Feb 16 11:25:27 crc kubenswrapper[4797]: I0216 11:25:27.479569 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76a621c6-7221-46cd-8385-2c733893ccd0","Type":"ContainerStarted","Data":"3eb52c701b2d3b9ef2b4798b0762b58ba5d131dff8da25a3e50f572e8bc7216a"}
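The pod_startup_latency_tracker lines above encode a simple relation: podStartSLOduration is the end-to-end startup duration minus the window spent pulling images, with the pull window taken from the monotonic "m=+..." clock readings (which is why the last digits differ slightly from naive wall-clock math). A small Go check of the glance-db-sync-zwkg2 numbers, using only values from the logged line:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and watchObservedRunningTime from the log line,
	// rewritten in RFC 3339 form for time.Parse.
	created, _ := time.Parse(time.RFC3339, "2026-02-16T11:25:05Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-02-16T11:25:24.46196765Z")

	// Image-pull window from the monotonic readings:
	// lastFinishedPulling m=+1118.034715640, firstStartedPulling m=+1101.151793853.
	pull := time.Duration((1118.034715640 - 1101.151793853) * float64(time.Second))

	e2e := observed.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e)      // 19.46196765s, as logged
	fmt.Println("podStartSLOduration:", e2e-pull) // ~2.579045863s, as logged
}
```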
event={"ID":"76a621c6-7221-46cd-8385-2c733893ccd0","Type":"ContainerStarted","Data":"3eb52c701b2d3b9ef2b4798b0762b58ba5d131dff8da25a3e50f572e8bc7216a"} Feb 16 11:25:27 crc kubenswrapper[4797]: I0216 11:25:27.483055 4797 generic.go:334] "Generic (PLEG): container finished" podID="54f56706-9d2d-4034-ab0d-ed5023bdde18" containerID="0acc9339135065381acc685e6fe8636570ad035642ea46d8da868a4fd5d9730d" exitCode=0 Feb 16 11:25:27 crc kubenswrapper[4797]: I0216 11:25:27.483143 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gbv4g" event={"ID":"54f56706-9d2d-4034-ab0d-ed5023bdde18","Type":"ContainerDied","Data":"0acc9339135065381acc685e6fe8636570ad035642ea46d8da868a4fd5d9730d"} Feb 16 11:25:27 crc kubenswrapper[4797]: I0216 11:25:27.486767 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"e24968b5853ac6fd529b68b9198ca19c77f1d1edc7bdda102ad6dfd4e66af835"} Feb 16 11:25:28 crc kubenswrapper[4797]: I0216 11:25:28.498320 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"ce99e7fcdea49bd422790eaf63df4ff2b0a2f82a3e0b4072189a3c5b4fefcc7b"} Feb 16 11:25:28 crc kubenswrapper[4797]: I0216 11:25:28.498666 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"5476e15745db52aab2c63803839d77bd1f475cffe8345b2852e60aaffe90c218"} Feb 16 11:25:28 crc kubenswrapper[4797]: I0216 11:25:28.498682 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"963069d616bb7ff1602bbcec24ede10c2c566c8b0ea3baa5ca64e60ebcc90a3d"} Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.018549 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.171925 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-combined-ca-bundle\") pod \"54f56706-9d2d-4034-ab0d-ed5023bdde18\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.172199 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-config-data\") pod \"54f56706-9d2d-4034-ab0d-ed5023bdde18\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.172326 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4b6\" (UniqueName: \"kubernetes.io/projected/54f56706-9d2d-4034-ab0d-ed5023bdde18-kube-api-access-rn4b6\") pod \"54f56706-9d2d-4034-ab0d-ed5023bdde18\" (UID: \"54f56706-9d2d-4034-ab0d-ed5023bdde18\") " Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.176790 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f56706-9d2d-4034-ab0d-ed5023bdde18-kube-api-access-rn4b6" (OuterVolumeSpecName: "kube-api-access-rn4b6") pod "54f56706-9d2d-4034-ab0d-ed5023bdde18" (UID: "54f56706-9d2d-4034-ab0d-ed5023bdde18"). 
InnerVolumeSpecName "kube-api-access-rn4b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.206239 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "54f56706-9d2d-4034-ab0d-ed5023bdde18" (UID: "54f56706-9d2d-4034-ab0d-ed5023bdde18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.217038 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-config-data" (OuterVolumeSpecName: "config-data") pod "54f56706-9d2d-4034-ab0d-ed5023bdde18" (UID: "54f56706-9d2d-4034-ab0d-ed5023bdde18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.274600 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.274640 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54f56706-9d2d-4034-ab0d-ed5023bdde18-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.274654 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4b6\" (UniqueName: \"kubernetes.io/projected/54f56706-9d2d-4034-ab0d-ed5023bdde18-kube-api-access-rn4b6\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.516441 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-gbv4g" event={"ID":"54f56706-9d2d-4034-ab0d-ed5023bdde18","Type":"ContainerDied","Data":"41c8219fbfe6036e698050ac9c636999203dd2bd80721f22de9b99affa4fc69a"} Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.516482 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41c8219fbfe6036e698050ac9c636999203dd2bd80721f22de9b99affa4fc69a" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.516451 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-gbv4g" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.521328 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"63f1f0b27b2da32e9693b85f3b53882391ea53bd178664edd1b9b6b50bd5ac99"} Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.521365 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"66f2d208c6902e59d5f4db713f3d0d7c2654d433bdbeeb6e330a5c3c8df4a27b"} Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.793968 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hl8dx"] Feb 16 11:25:29 crc kubenswrapper[4797]: E0216 11:25:29.797463 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54f56706-9d2d-4034-ab0d-ed5023bdde18" containerName="keystone-db-sync" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.797489 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="54f56706-9d2d-4034-ab0d-ed5023bdde18" containerName="keystone-db-sync" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.798040 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="54f56706-9d2d-4034-ab0d-ed5023bdde18" containerName="keystone-db-sync" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.806106 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.834895 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hl8dx"] Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.918631 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cpv8d"] Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.920161 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.924189 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.924347 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kpt68" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.924453 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.924551 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.925050 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.945627 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cpv8d"] Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.991651 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jkt2\" (UniqueName: \"kubernetes.io/projected/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-kube-api-access-2jkt2\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.991724 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.991749 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-dns-svc\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.991793 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:29 crc kubenswrapper[4797]: I0216 11:25:29.991868 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-config\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.028640 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-grph6"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.029999 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.037540 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7dchp" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.037684 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.037768 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.050960 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2dfc7"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.052199 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.060021 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-992vq" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.060699 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.064511 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-grph6"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.065198 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.092967 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-scripts\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093009 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093034 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-combined-ca-bundle\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093060 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-credential-keys\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093104 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-fernet-keys\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: 
I0216 11:25:30.093151 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-config\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093191 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jkt2\" (UniqueName: \"kubernetes.io/projected/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-kube-api-access-2jkt2\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093213 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6bqw\" (UniqueName: \"kubernetes.io/projected/855c2e17-ce4e-4541-a378-882268d22af4-kube-api-access-t6bqw\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093233 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-config-data\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093263 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.093284 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-dns-svc\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.094688 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-config\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.098631 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.100246 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2dfc7"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.116882 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " 
pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.121439 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-dns-svc\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.171407 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jkt2\" (UniqueName: \"kubernetes.io/projected/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-kube-api-access-2jkt2\") pod \"dnsmasq-dns-f877ddd87-hl8dx\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.184881 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200174 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-combined-ca-bundle\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200232 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-credential-keys\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200288 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bxz\" (UniqueName: \"kubernetes.io/projected/bbc2d12e-1b1b-43cc-baad-ff26e8423891-kube-api-access-k6bxz\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200310 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-db-sync-config-data\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200341 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-fernet-keys\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200361 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-combined-ca-bundle\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200392 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnxg\" (UniqueName: 
\"kubernetes.io/projected/062948d0-fd09-4e11-904d-a346a430ee4f-kube-api-access-nwnxg\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200410 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-config\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200455 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/062948d0-fd09-4e11-904d-a346a430ee4f-etc-machine-id\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200471 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-scripts\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200491 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-combined-ca-bundle\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200519 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6bqw\" (UniqueName: \"kubernetes.io/projected/855c2e17-ce4e-4541-a378-882268d22af4-kube-api-access-t6bqw\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200545 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-config-data\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200627 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-config-data\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.200649 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-scripts\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.216030 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-combined-ca-bundle\") pod \"keystone-bootstrap-cpv8d\" (UID: 
\"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.235240 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-credential-keys\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.237766 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-scripts\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.245452 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-fernet-keys\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.255304 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-config-data\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.269346 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6bqw\" (UniqueName: \"kubernetes.io/projected/855c2e17-ce4e-4541-a378-882268d22af4-kube-api-access-t6bqw\") pod \"keystone-bootstrap-cpv8d\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.278225 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.293012 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-dhgrw"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.295913 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.300543 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.300892 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-9rfqw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.301127 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.301396 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303012 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6bxz\" (UniqueName: \"kubernetes.io/projected/bbc2d12e-1b1b-43cc-baad-ff26e8423891-kube-api-access-k6bxz\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303042 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-db-sync-config-data\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303079 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-combined-ca-bundle\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303108 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnxg\" (UniqueName: \"kubernetes.io/projected/062948d0-fd09-4e11-904d-a346a430ee4f-kube-api-access-nwnxg\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303123 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-config\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303167 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/062948d0-fd09-4e11-904d-a346a430ee4f-etc-machine-id\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303187 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-scripts\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303206 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-combined-ca-bundle\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.303273 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-config-data\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.322358 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-config\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.323014 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-scripts\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.329520 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-db-sync-config-data\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.330476 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/062948d0-fd09-4e11-904d-a346a430ee4f-etc-machine-id\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.349474 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-dhgrw"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.357614 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-config-data\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.363651 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-z8bpc"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.366285 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-combined-ca-bundle\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.375150 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.376721 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-combined-ca-bundle\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.378740 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6bxz\" (UniqueName: \"kubernetes.io/projected/bbc2d12e-1b1b-43cc-baad-ff26e8423891-kube-api-access-k6bxz\") pod \"neutron-db-sync-grph6\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.379466 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnxg\" (UniqueName: \"kubernetes.io/projected/062948d0-fd09-4e11-904d-a346a430ee4f-kube-api-access-nwnxg\") pod \"cinder-db-sync-2dfc7\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.386480 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-z8bpc"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.387915 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.402613 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.402900 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xvcw6" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.405189 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.406922 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fvxd\" (UniqueName: \"kubernetes.io/projected/895bed8d-c376-47ad-8fa6-3cf0f07399c0-kube-api-access-4fvxd\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.407104 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-scripts\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.407126 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-config-data\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.407171 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/895bed8d-c376-47ad-8fa6-3cf0f07399c0-certs\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.407193 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-combined-ca-bundle\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.411832 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.414168 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.422754 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.422931 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.430805 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5kmr4"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.431919 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.443975 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rhw9s" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.444161 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.444265 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.444347 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5kmr4"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.470666 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.487357 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hl8dx"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508603 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508656 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508680 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-db-sync-config-data\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508700 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-combined-ca-bundle\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508718 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-combined-ca-bundle\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508740 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-scripts\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508759 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-h7pqs\" (UniqueName: \"kubernetes.io/projected/a0649e0a-7249-45bd-ad8f-6c7e61456322-kube-api-access-h7pqs\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508780 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-run-httpd\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508820 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fvxd\" (UniqueName: \"kubernetes.io/projected/895bed8d-c376-47ad-8fa6-3cf0f07399c0-kube-api-access-4fvxd\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508834 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-scripts\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508861 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7hcr\" (UniqueName: \"kubernetes.io/projected/24fea779-c008-4fda-b2d0-e3201f7dfaed-kube-api-access-f7hcr\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508882 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jh6n\" (UniqueName: \"kubernetes.io/projected/35f90c62-8793-4bcc-8b06-9b0b710776d7-kube-api-access-5jh6n\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508919 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-config-data\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508938 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-config-data\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508957 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-scripts\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.508973 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-config-data\") pod 
\"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.509006 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f90c62-8793-4bcc-8b06-9b0b710776d7-logs\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.509051 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-log-httpd\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.509070 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/895bed8d-c376-47ad-8fa6-3cf0f07399c0-certs\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.509089 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-combined-ca-bundle\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.515040 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-8659p"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.517002 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.532436 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/895bed8d-c376-47ad-8fa6-3cf0f07399c0-certs\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.547521 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-config-data\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.547799 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-scripts\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.568046 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/895bed8d-c376-47ad-8fa6-3cf0f07399c0-combined-ca-bundle\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.619642 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fvxd\" (UniqueName: \"kubernetes.io/projected/895bed8d-c376-47ad-8fa6-3cf0f07399c0-kube-api-access-4fvxd\") pod \"cloudkitty-db-sync-dhgrw\" (UID: \"895bed8d-c376-47ad-8fa6-3cf0f07399c0\") " pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621431 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-log-httpd\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621470 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621498 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621516 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621533 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-db-sync-config-data\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621548 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-combined-ca-bundle\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621566 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-combined-ca-bundle\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621599 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-scripts\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621620 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7pqs\" (UniqueName: \"kubernetes.io/projected/a0649e0a-7249-45bd-ad8f-6c7e61456322-kube-api-access-h7pqs\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621641 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-run-httpd\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621674 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-scripts\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621698 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7hcr\" (UniqueName: \"kubernetes.io/projected/24fea779-c008-4fda-b2d0-e3201f7dfaed-kube-api-access-f7hcr\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621716 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jh6n\" (UniqueName: \"kubernetes.io/projected/35f90c62-8793-4bcc-8b06-9b0b710776d7-kube-api-access-5jh6n\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621740 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wptlw\" (UniqueName: \"kubernetes.io/projected/851a7047-75f7-4329-8da0-64e3533569fc-kube-api-access-wptlw\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " 
pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621759 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621782 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-config-data\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621802 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621819 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-config-data\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621842 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f90c62-8793-4bcc-8b06-9b0b710776d7-logs\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.621859 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-config\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.622302 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-log-httpd\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.637570 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f90c62-8793-4bcc-8b06-9b0b710776d7-logs\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.638017 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-run-httpd\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.661023 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-config-data\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.663081 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-8659p"] Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.665947 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-config-data\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.666202 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-combined-ca-bundle\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.688468 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-scripts\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.688753 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-scripts\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.689436 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.693709 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-combined-ca-bundle\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.693766 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.695223 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jh6n\" (UniqueName: \"kubernetes.io/projected/35f90c62-8793-4bcc-8b06-9b0b710776d7-kube-api-access-5jh6n\") pod \"placement-db-sync-5kmr4\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.695724 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7hcr\" (UniqueName: \"kubernetes.io/projected/24fea779-c008-4fda-b2d0-e3201f7dfaed-kube-api-access-f7hcr\") pod \"barbican-db-sync-z8bpc\" (UID: 
\"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.704903 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"0a765ca10b6e0ec9470199284a5435f73a227a839393bf6be305b7b0139956cc"} Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.704955 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"f041b2dc47386ac32816629caf34873eb28cf64577520663e179aeaca40ecac4"} Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.704967 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"54ad5af990e0090cc8f10057c11fa75a1e44084c43ae1154650ffde5e2aa9373"} Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.705332 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7pqs\" (UniqueName: \"kubernetes.io/projected/a0649e0a-7249-45bd-ad8f-6c7e61456322-kube-api-access-h7pqs\") pod \"ceilometer-0\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.711446 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-db-sync-config-data\") pod \"barbican-db-sync-z8bpc\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") " pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.730942 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wptlw\" (UniqueName: \"kubernetes.io/projected/851a7047-75f7-4329-8da0-64e3533569fc-kube-api-access-wptlw\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.730981 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.731014 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.731057 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-config\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.731147 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-dns-svc\") pod 
\"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.732287 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-config\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.732907 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.733227 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.733490 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.760551 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wptlw\" (UniqueName: \"kubernetes.io/projected/851a7047-75f7-4329-8da0-64e3533569fc-kube-api-access-wptlw\") pod \"dnsmasq-dns-68dcc9cf6f-8659p\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.864845 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-dhgrw" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.893066 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-z8bpc" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.907023 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.955385 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5kmr4" Feb 16 11:25:30 crc kubenswrapper[4797]: I0216 11:25:30.984628 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.214180 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hl8dx"] Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.365418 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cpv8d"] Feb 16 11:25:31 crc kubenswrapper[4797]: W0216 11:25:31.389662 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod855c2e17_ce4e_4541_a378_882268d22af4.slice/crio-ffddbc7720d70108d436e0aaf7f54029f016b57278c4ac0461225e216f6eebfa WatchSource:0}: Error finding container ffddbc7720d70108d436e0aaf7f54029f016b57278c4ac0461225e216f6eebfa: Status 404 returned error can't find the container with id ffddbc7720d70108d436e0aaf7f54029f016b57278c4ac0461225e216f6eebfa Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.437634 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2dfc7"] Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.579397 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-grph6"] Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.721560 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpv8d" event={"ID":"855c2e17-ce4e-4541-a378-882268d22af4","Type":"ContainerStarted","Data":"ffddbc7720d70108d436e0aaf7f54029f016b57278c4ac0461225e216f6eebfa"} Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.723908 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2dfc7" event={"ID":"062948d0-fd09-4e11-904d-a346a430ee4f","Type":"ContainerStarted","Data":"646c22217dbbc8d47779de87ff8c2f61699173367eb80cc977ac316647c6cf26"} Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.726165 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grph6" event={"ID":"bbc2d12e-1b1b-43cc-baad-ff26e8423891","Type":"ContainerStarted","Data":"694111c6652b7867b9dce8297b9c8121554eb44b7a5e93ebc1f3feac7d5d511a"} Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.737700 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.739762 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" event={"ID":"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945","Type":"ContainerStarted","Data":"d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49"} Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.739807 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" event={"ID":"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945","Type":"ContainerStarted","Data":"0cd06df7adbaeca016642301dadcef8e7d695b3307b7822d97abae0edcb91f55"} Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.739932 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" podUID="ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" containerName="init" containerID="cri-o://d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49" gracePeriod=10 Feb 16 11:25:31 crc kubenswrapper[4797]: W0216 11:25:31.744842 4797 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda0649e0a_7249_45bd_ad8f_6c7e61456322.slice/crio-63163e339aeaaab65320b953243392c886b4585797a8e1377a2249e3978ef011 WatchSource:0}: Error finding container 63163e339aeaaab65320b953243392c886b4585797a8e1377a2249e3978ef011: Status 404 returned error can't find the container with id 63163e339aeaaab65320b953243392c886b4585797a8e1377a2249e3978ef011 Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.771991 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"ff54ece757715a137b4e2ebae720b6f4b57f92c467b37d7c6b613a21276525f3"} Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.905747 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5kmr4"] Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.922804 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-dhgrw"] Feb 16 11:25:31 crc kubenswrapper[4797]: W0216 11:25:31.925247 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35f90c62_8793_4bcc_8b06_9b0b710776d7.slice/crio-251c487618195f7eea77ef6f9b8ea8c2350247ebbc532a78068b6d2fe87f11c3 WatchSource:0}: Error finding container 251c487618195f7eea77ef6f9b8ea8c2350247ebbc532a78068b6d2fe87f11c3: Status 404 returned error can't find the container with id 251c487618195f7eea77ef6f9b8ea8c2350247ebbc532a78068b6d2fe87f11c3 Feb 16 11:25:31 crc kubenswrapper[4797]: W0216 11:25:31.932721 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod895bed8d_c376_47ad_8fa6_3cf0f07399c0.slice/crio-c5d219fffc582d1aa9101d78cfe618338fa2dfce9f9c1cc2d819e68903e8963f WatchSource:0}: Error finding container c5d219fffc582d1aa9101d78cfe618338fa2dfce9f9c1cc2d819e68903e8963f: Status 404 returned error can't find the container with id c5d219fffc582d1aa9101d78cfe618338fa2dfce9f9c1cc2d819e68903e8963f Feb 16 11:25:31 crc kubenswrapper[4797]: I0216 11:25:31.942824 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-z8bpc"] Feb 16 11:25:31 crc kubenswrapper[4797]: W0216 11:25:31.972220 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24fea779_c008_4fda_b2d0_e3201f7dfaed.slice/crio-b1a2ca2734d2b3a901fc5d86d972c45991e1e0d9f568bf2040fd8936e1869340 WatchSource:0}: Error finding container b1a2ca2734d2b3a901fc5d86d972c45991e1e0d9f568bf2040fd8936e1869340: Status 404 returned error can't find the container with id b1a2ca2734d2b3a901fc5d86d972c45991e1e0d9f568bf2040fd8936e1869340 Feb 16 11:25:32 crc kubenswrapper[4797]: E0216 11:25:32.078437 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:25:32 crc kubenswrapper[4797]: E0216 11:25:32.078510 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:25:32 crc kubenswrapper[4797]: E0216 11:25:32.078729 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 11:25:32 crc kubenswrapper[4797]: E0216 11:25:32.079827 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.158740 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-8659p"] Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.379297 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.429854 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.488475 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-dns-svc\") pod \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.488627 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jkt2\" (UniqueName: \"kubernetes.io/projected/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-kube-api-access-2jkt2\") pod \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.488738 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-nb\") pod \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.488770 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-config\") pod \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.488822 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-sb\") pod \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\" (UID: \"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945\") " Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.506943 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-kube-api-access-2jkt2" (OuterVolumeSpecName: "kube-api-access-2jkt2") pod "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" (UID: "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945"). InnerVolumeSpecName "kube-api-access-2jkt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.525533 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" (UID: "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.539516 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" (UID: "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.595455 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.595732 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jkt2\" (UniqueName: \"kubernetes.io/projected/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-kube-api-access-2jkt2\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.595745 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.606921 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" (UID: "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.634411 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-config" (OuterVolumeSpecName: "config") pod "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" (UID: "ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.698518 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.698559 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.843091 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9f443541-845c-4fdd-b6d1-08aba5c39667","Type":"ContainerStarted","Data":"16630d74cd0cb1bed18c14c3d04ca29adbec64371cf77518f19cdbcc5916169c"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.867509 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5kmr4" event={"ID":"35f90c62-8793-4bcc-8b06-9b0b710776d7","Type":"ContainerStarted","Data":"251c487618195f7eea77ef6f9b8ea8c2350247ebbc532a78068b6d2fe87f11c3"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.870646 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerStarted","Data":"63163e339aeaaab65320b953243392c886b4585797a8e1377a2249e3978ef011"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.872938 4797 generic.go:334] "Generic (PLEG): container finished" podID="448a4a0f-a469-415f-8dcc-6223ee884c29" containerID="718500b1a588f795158e350d37d97282db7b75d658f9facf08b8ce3ed56b994b" exitCode=0 Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.872994 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zwkg2" event={"ID":"448a4a0f-a469-415f-8dcc-6223ee884c29","Type":"ContainerDied","Data":"718500b1a588f795158e350d37d97282db7b75d658f9facf08b8ce3ed56b994b"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.878942 4797 generic.go:334] "Generic (PLEG): container finished" podID="ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" containerID="d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49" exitCode=0 Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.879012 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" event={"ID":"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945","Type":"ContainerDied","Data":"d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.879043 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-hl8dx" event={"ID":"ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945","Type":"ContainerDied","Data":"0cd06df7adbaeca016642301dadcef8e7d695b3307b7822d97abae0edcb91f55"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.879060 4797 scope.go:117] "RemoveContainer" containerID="d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.879222 4797 util.go:48] "No ready sandbox for pod can be found. 
Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.894646 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-dhgrw" event={"ID":"895bed8d-c376-47ad-8fa6-3cf0f07399c0","Type":"ContainerStarted","Data":"c5d219fffc582d1aa9101d78cfe618338fa2dfce9f9c1cc2d819e68903e8963f"} Feb 16 11:25:32 crc kubenswrapper[4797]: E0216 11:25:32.896397 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.908614 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grph6" event={"ID":"bbc2d12e-1b1b-43cc-baad-ff26e8423891","Type":"ContainerStarted","Data":"3a65263a2a7e396e956e7a22682c657dafad794a59d372110ff6b19e5b2691aa"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.940423 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.96424417 podStartE2EDuration="50.940404645s" podCreationTimestamp="2026-02-16 11:24:42 +0000 UTC" firstStartedPulling="2026-02-16 11:25:23.972893029 +0000 UTC m=+1118.693078009" lastFinishedPulling="2026-02-16 11:25:28.949053494 +0000 UTC m=+1123.669238484" observedRunningTime="2026-02-16 11:25:32.932126058 +0000 UTC m=+1127.652311038" watchObservedRunningTime="2026-02-16 11:25:32.940404645 +0000 UTC m=+1127.660589625" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.950413 4797 generic.go:334] "Generic (PLEG): container finished" podID="851a7047-75f7-4329-8da0-64e3533569fc" containerID="94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d" exitCode=0 Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.950652 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" event={"ID":"851a7047-75f7-4329-8da0-64e3533569fc","Type":"ContainerDied","Data":"94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.950714 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" event={"ID":"851a7047-75f7-4329-8da0-64e3533569fc","Type":"ContainerStarted","Data":"8aa0901b8f4d9ca0884bcd6d537aae24a22f42ef584692267c810a28e4cd1c7f"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.969827 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z8bpc" event={"ID":"24fea779-c008-4fda-b2d0-e3201f7dfaed","Type":"ContainerStarted","Data":"b1a2ca2734d2b3a901fc5d86d972c45991e1e0d9f568bf2040fd8936e1869340"} Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.981349 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-grph6" podStartSLOduration=3.981328034 podStartE2EDuration="3.981328034s" podCreationTimestamp="2026-02-16 11:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:25:32.948982499 +0000 UTC m=+1127.669167489" watchObservedRunningTime="2026-02-16 11:25:32.981328034 +0000 UTC m=+1127.701513014" Feb 16 11:25:32 crc kubenswrapper[4797]: I0216 11:25:32.987614 4797 kubelet.go:2453]
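"SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpv8d" event={"ID":"855c2e17-ce4e-4541-a378-882268d22af4","Type":"ContainerStarted","Data":"a04639c9b7a5ebdecd4efe75e843728a89325d33c478251558b69d63ebc7114b"}

The pod_startup_latency_tracker record for swift-storage-0 above makes the relationship between the two durations explicit: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Reproducing the arithmetic from the record's own timestamps (wall clock only, so the final digits differ marginally from the monotonic m=+... readings the tracker uses):

    // sloduration.go: recomputing swift-storage-0's startup durations from the record.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2026-02-16 11:24:42 +0000 UTC")
        firstPull := mustParse("2026-02-16 11:25:23.972893029 +0000 UTC")
        lastPull := mustParse("2026-02-16 11:25:28.949053494 +0000 UTC")
        running := mustParse("2026-02-16 11:25:32.940404645 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration: 50.940404645s
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: ~45.96424418s
        fmt.Println(e2e, slo)
    }

For pods whose image needed no pull, such as neutron-db-sync-grph6 above, both pulling timestamps are the zero time and the two durations coincide.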
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpv8d" event={"ID":"855c2e17-ce4e-4541-a378-882268d22af4","Type":"ContainerStarted","Data":"a04639c9b7a5ebdecd4efe75e843728a89325d33c478251558b69d63ebc7114b"} Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.045317 4797 scope.go:117] "RemoveContainer" containerID="d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49" Feb 16 11:25:33 crc kubenswrapper[4797]: E0216 11:25:33.046239 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49\": container with ID starting with d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49 not found: ID does not exist" containerID="d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.046277 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49"} err="failed to get container status \"d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49\": rpc error: code = NotFound desc = could not find container \"d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49\": container with ID starting with d0facb7539bf3b10b11a47b236574335f774867b78905fbbf4a9148b241d8e49 not found: ID does not exist" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.138429 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hl8dx"] Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.155485 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-hl8dx"] Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.165598 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cpv8d" podStartSLOduration=4.16556707 podStartE2EDuration="4.16556707s" podCreationTimestamp="2026-02-16 11:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:25:33.109752014 +0000 UTC m=+1127.829936984" watchObservedRunningTime="2026-02-16 11:25:33.16556707 +0000 UTC m=+1127.885752050" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.260307 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-8659p"] Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.314933 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9w2fx"] Feb 16 11:25:33 crc kubenswrapper[4797]: E0216 11:25:33.315446 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" containerName="init" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.315470 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" containerName="init" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.315688 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" containerName="init" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.326842 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9w2fx"] Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.326975 4797 util.go:30] "No sandbox for pod can be found. 
Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.329473 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.443803 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.444014 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.444236 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.444359 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-config\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.444412 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l24t\" (UniqueName: \"kubernetes.io/projected/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-kube-api-access-5l24t\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.444503 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.546673 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.546815 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.546836 4797 reconciler_common.go:218]
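"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx"

The mount side mirrors the teardown seen earlier: for the replacement pod dnsmasq-dns-58dd9ff6bc-9w2fx each volume passes through VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded, and only once all six are up does the sandbox appear (the PLEG events for sandbox 3dbc0bd9... follow at 11:25:35). An illustrative gate, not kubelet's code:

    // mountgate.go: pod start waits on every declared volume, illustrative only.
    package main

    import "fmt"

    func main() {
        need := []string{"dns-swift-storage-0", "dns-svc", "ovsdbserver-nb",
            "config", "kube-api-access-5l24t", "ovsdbserver-sb"}
        mounted := map[string]bool{}
        for _, v := range need {
            fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v)
            mounted[v] = true
        }
        for _, v := range need {
            if !mounted[v] {
                return // still waiting; the sandbox cannot start yet
            }
        }
        fmt.Println("all volumes mounted; starting sandbox for dnsmasq-dns-58dd9ff6bc-9w2fx")
    }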
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.546887 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-config\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.546906 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l24t\" (UniqueName: \"kubernetes.io/projected/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-kube-api-access-5l24t\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.546933 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.547762 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.547798 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.548387 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-config\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.548738 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.549182 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.591437 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l24t\" (UniqueName: 
\"kubernetes.io/projected/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-kube-api-access-5l24t\") pod \"dnsmasq-dns-58dd9ff6bc-9w2fx\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:33 crc kubenswrapper[4797]: I0216 11:25:33.662703 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.014339 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" podUID="851a7047-75f7-4329-8da0-64e3533569fc" containerName="dnsmasq-dns" containerID="cri-o://d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53" gracePeriod=10 Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.022415 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945" path="/var/lib/kubelet/pods/ab4e8c96-bca7-4eea-9ff2-a9fbd1f01945/volumes" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.023294 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.023337 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" event={"ID":"851a7047-75f7-4329-8da0-64e3533569fc","Type":"ContainerStarted","Data":"d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53"} Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.030057 4797 generic.go:334] "Generic (PLEG): container finished" podID="76a621c6-7221-46cd-8385-2c733893ccd0" containerID="3eb52c701b2d3b9ef2b4798b0762b58ba5d131dff8da25a3e50f572e8bc7216a" exitCode=0 Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.030115 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76a621c6-7221-46cd-8385-2c733893ccd0","Type":"ContainerDied","Data":"3eb52c701b2d3b9ef2b4798b0762b58ba5d131dff8da25a3e50f572e8bc7216a"} Feb 16 11:25:34 crc kubenswrapper[4797]: E0216 11:25:34.049641 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.057526 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" podStartSLOduration=4.057500886 podStartE2EDuration="4.057500886s" podCreationTimestamp="2026-02-16 11:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:25:34.041759255 +0000 UTC m=+1128.761944255" watchObservedRunningTime="2026-02-16 11:25:34.057500886 +0000 UTC m=+1128.777685876" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.273289 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9w2fx"] Feb 16 11:25:34 crc kubenswrapper[4797]: W0216 11:25:34.291367 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93cd1d9c_b6c4_436d_8d64_0b00079f5f42.slice/crio-3dbc0bd9b26d6d7eab11511c81163fec1454632fac68ab39672667f1b61ba178 WatchSource:0}: Error finding 
container 3dbc0bd9b26d6d7eab11511c81163fec1454632fac68ab39672667f1b61ba178: Status 404 returned error can't find the container with id 3dbc0bd9b26d6d7eab11511c81163fec1454632fac68ab39672667f1b61ba178 Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.814443 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.892307 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-config-data\") pod \"448a4a0f-a469-415f-8dcc-6223ee884c29\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.892785 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-combined-ca-bundle\") pod \"448a4a0f-a469-415f-8dcc-6223ee884c29\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.892874 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfwrf\" (UniqueName: \"kubernetes.io/projected/448a4a0f-a469-415f-8dcc-6223ee884c29-kube-api-access-vfwrf\") pod \"448a4a0f-a469-415f-8dcc-6223ee884c29\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.892916 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-db-sync-config-data\") pod \"448a4a0f-a469-415f-8dcc-6223ee884c29\" (UID: \"448a4a0f-a469-415f-8dcc-6223ee884c29\") " Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.932911 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448a4a0f-a469-415f-8dcc-6223ee884c29-kube-api-access-vfwrf" (OuterVolumeSpecName: "kube-api-access-vfwrf") pod "448a4a0f-a469-415f-8dcc-6223ee884c29" (UID: "448a4a0f-a469-415f-8dcc-6223ee884c29"). InnerVolumeSpecName "kube-api-access-vfwrf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.933042 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "448a4a0f-a469-415f-8dcc-6223ee884c29" (UID: "448a4a0f-a469-415f-8dcc-6223ee884c29"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.951720 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.963194 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "448a4a0f-a469-415f-8dcc-6223ee884c29" (UID: "448a4a0f-a469-415f-8dcc-6223ee884c29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.987691 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-config-data" (OuterVolumeSpecName: "config-data") pod "448a4a0f-a469-415f-8dcc-6223ee884c29" (UID: "448a4a0f-a469-415f-8dcc-6223ee884c29"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.995703 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.995730 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.995741 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfwrf\" (UniqueName: \"kubernetes.io/projected/448a4a0f-a469-415f-8dcc-6223ee884c29-kube-api-access-vfwrf\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:34 crc kubenswrapper[4797]: I0216 11:25:34.995752 4797 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/448a4a0f-a469-415f-8dcc-6223ee884c29-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.083882 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-zwkg2" event={"ID":"448a4a0f-a469-415f-8dcc-6223ee884c29","Type":"ContainerDied","Data":"0d709728ee8ee7ea07112c6b80d3950d0a10b3e98da5dca082e02acce8a3fcdb"} Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.083917 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d709728ee8ee7ea07112c6b80d3950d0a10b3e98da5dca082e02acce8a3fcdb" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.083972 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-zwkg2" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.088275 4797 generic.go:334] "Generic (PLEG): container finished" podID="851a7047-75f7-4329-8da0-64e3533569fc" containerID="d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53" exitCode=0 Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.088335 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" event={"ID":"851a7047-75f7-4329-8da0-64e3533569fc","Type":"ContainerDied","Data":"d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53"} Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.088362 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" event={"ID":"851a7047-75f7-4329-8da0-64e3533569fc","Type":"ContainerDied","Data":"8aa0901b8f4d9ca0884bcd6d537aae24a22f42ef584692267c810a28e4cd1c7f"} Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.088378 4797 scope.go:117] "RemoveContainer" containerID="d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.088492 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-8659p" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.098263 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-nb\") pod \"851a7047-75f7-4329-8da0-64e3533569fc\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.098408 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-config\") pod \"851a7047-75f7-4329-8da0-64e3533569fc\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.098431 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wptlw\" (UniqueName: \"kubernetes.io/projected/851a7047-75f7-4329-8da0-64e3533569fc-kube-api-access-wptlw\") pod \"851a7047-75f7-4329-8da0-64e3533569fc\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.098483 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-sb\") pod \"851a7047-75f7-4329-8da0-64e3533569fc\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.098614 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-dns-svc\") pod \"851a7047-75f7-4329-8da0-64e3533569fc\" (UID: \"851a7047-75f7-4329-8da0-64e3533569fc\") " Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.101033 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76a621c6-7221-46cd-8385-2c733893ccd0","Type":"ContainerStarted","Data":"250dcfd172588e0002d58260e35e6994c99d93052d9a71355d45b23e5ce35635"} Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.104880 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851a7047-75f7-4329-8da0-64e3533569fc-kube-api-access-wptlw" (OuterVolumeSpecName: "kube-api-access-wptlw") pod "851a7047-75f7-4329-8da0-64e3533569fc" (UID: "851a7047-75f7-4329-8da0-64e3533569fc"). InnerVolumeSpecName "kube-api-access-wptlw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.114286 4797 generic.go:334] "Generic (PLEG): container finished" podID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerID="c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8" exitCode=0 Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.114331 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" event={"ID":"93cd1d9c-b6c4-436d-8d64-0b00079f5f42","Type":"ContainerDied","Data":"c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8"} Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.114357 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" event={"ID":"93cd1d9c-b6c4-436d-8d64-0b00079f5f42","Type":"ContainerStarted","Data":"3dbc0bd9b26d6d7eab11511c81163fec1454632fac68ab39672667f1b61ba178"} Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.133212 4797 scope.go:117] "RemoveContainer" containerID="94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.182214 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "851a7047-75f7-4329-8da0-64e3533569fc" (UID: "851a7047-75f7-4329-8da0-64e3533569fc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.183012 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "851a7047-75f7-4329-8da0-64e3533569fc" (UID: "851a7047-75f7-4329-8da0-64e3533569fc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.195537 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-config" (OuterVolumeSpecName: "config") pod "851a7047-75f7-4329-8da0-64e3533569fc" (UID: "851a7047-75f7-4329-8da0-64e3533569fc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.201725 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.201788 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.201800 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wptlw\" (UniqueName: \"kubernetes.io/projected/851a7047-75f7-4329-8da0-64e3533569fc-kube-api-access-wptlw\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.201810 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.212278 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "851a7047-75f7-4329-8da0-64e3533569fc" (UID: "851a7047-75f7-4329-8da0-64e3533569fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.304392 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851a7047-75f7-4329-8da0-64e3533569fc-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.460142 4797 scope.go:117] "RemoveContainer" containerID="d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53" Feb 16 11:25:35 crc kubenswrapper[4797]: E0216 11:25:35.462961 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53\": container with ID starting with d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53 not found: ID does not exist" containerID="d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.463011 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53"} err="failed to get container status \"d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53\": rpc error: code = NotFound desc = could not find container \"d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53\": container with ID starting with d0097f474c564dd3c419a0c19ef5aaa0ac61ab7244b6ac85b05f630cb284fc53 not found: ID does not exist" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.463034 4797 scope.go:117] "RemoveContainer" containerID="94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d" Feb 16 11:25:35 crc kubenswrapper[4797]: E0216 11:25:35.470713 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d\": container with ID starting with 
94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d not found: ID does not exist" containerID="94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.470793 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d"} err="failed to get container status \"94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d\": rpc error: code = NotFound desc = could not find container \"94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d\": container with ID starting with 94314368229c4a01da90b52f0d2432a15928b2ddf05dbefd045bdac7b4b4d26d not found: ID does not exist" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.480524 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-8659p"] Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.488080 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-8659p"] Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.565788 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9w2fx"] Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.584654 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mnvc5"] Feb 16 11:25:35 crc kubenswrapper[4797]: E0216 11:25:35.587130 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851a7047-75f7-4329-8da0-64e3533569fc" containerName="init" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.587157 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="851a7047-75f7-4329-8da0-64e3533569fc" containerName="init" Feb 16 11:25:35 crc kubenswrapper[4797]: E0216 11:25:35.587172 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448a4a0f-a469-415f-8dcc-6223ee884c29" containerName="glance-db-sync" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.587179 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="448a4a0f-a469-415f-8dcc-6223ee884c29" containerName="glance-db-sync" Feb 16 11:25:35 crc kubenswrapper[4797]: E0216 11:25:35.587203 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851a7047-75f7-4329-8da0-64e3533569fc" containerName="dnsmasq-dns" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.587208 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="851a7047-75f7-4329-8da0-64e3533569fc" containerName="dnsmasq-dns" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.587379 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="448a4a0f-a469-415f-8dcc-6223ee884c29" containerName="glance-db-sync" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.587398 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="851a7047-75f7-4329-8da0-64e3533569fc" containerName="dnsmasq-dns" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.588737 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.622731 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mnvc5"] Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.716276 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.716324 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-config\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.716415 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.716445 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.716483 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.716525 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4jvl\" (UniqueName: \"kubernetes.io/projected/ac2e5b3c-86cc-42ce-bc7a-630034757e55-kube-api-access-n4jvl\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.817922 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.818004 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.818058 4797 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n4jvl\" (UniqueName: \"kubernetes.io/projected/ac2e5b3c-86cc-42ce-bc7a-630034757e55-kube-api-access-n4jvl\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.818087 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.818108 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-config\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.818221 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.819556 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.819661 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.820097 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.820673 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-config\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.821965 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.841872 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4jvl\" (UniqueName: 
\"kubernetes.io/projected/ac2e5b3c-86cc-42ce-bc7a-630034757e55-kube-api-access-n4jvl\") pod \"dnsmasq-dns-785d8bcb8c-mnvc5\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.913872 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:35 crc kubenswrapper[4797]: I0216 11:25:35.996045 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851a7047-75f7-4329-8da0-64e3533569fc" path="/var/lib/kubelet/pods/851a7047-75f7-4329-8da0-64e3533569fc/volumes" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.157624 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" event={"ID":"93cd1d9c-b6c4-436d-8d64-0b00079f5f42","Type":"ContainerStarted","Data":"572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f"} Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.157911 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.157969 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" podUID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerName="dnsmasq-dns" containerID="cri-o://572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f" gracePeriod=10 Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.202045 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" podStartSLOduration=3.202019865 podStartE2EDuration="3.202019865s" podCreationTimestamp="2026-02-16 11:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:25:36.196883634 +0000 UTC m=+1130.917068624" watchObservedRunningTime="2026-02-16 11:25:36.202019865 +0000 UTC m=+1130.922204855" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.412198 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.414272 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.422018 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.422312 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.422636 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2cxxr" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.437978 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.506048 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mnvc5"] Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.532887 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.532946 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-logs\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.532984 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5knw\" (UniqueName: \"kubernetes.io/projected/d1d5f264-2ed7-43a2-8179-53917835fc77-kube-api-access-p5knw\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.533022 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.533058 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.533266 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.533340 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.635189 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.635263 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.635366 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.635407 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.635479 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.635507 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-logs\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.635535 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5knw\" (UniqueName: \"kubernetes.io/projected/d1d5f264-2ed7-43a2-8179-53917835fc77-kube-api-access-p5knw\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.637333 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.639905 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-logs\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.640487 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.640538 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.642330 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.642378 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8da30fb4c83d9deb7f001a58f922a696263527e837af7c4c51b5beb3f892969/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.660365 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5knw\" (UniqueName: \"kubernetes.io/projected/d1d5f264-2ed7-43a2-8179-53917835fc77-kube-api-access-p5knw\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.681452 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.768390 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.770997 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.773009 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.803317 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.845039 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.845097 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.845165 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.845253 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtprl\" (UniqueName: \"kubernetes.io/projected/0965af42-84ad-45d8-9516-4d835e8e2242-kube-api-access-vtprl\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.845314 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.845368 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.845430 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-logs\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.947744 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtprl\" (UniqueName: 
\"kubernetes.io/projected/0965af42-84ad-45d8-9516-4d835e8e2242-kube-api-access-vtprl\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.947842 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.947900 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.947964 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-logs\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.948069 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.948099 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.948132 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.950050 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.950364 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-logs\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.958409 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.959440 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.967354 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.986270 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.986323 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27e379195fe32f84d1c9f17b5c57278f71c5a261b9f037c02b6f4c2041aa5cbc/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 11:25:36 crc kubenswrapper[4797]: I0216 11:25:36.986495 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtprl\" (UniqueName: \"kubernetes.io/projected/0965af42-84ad-45d8-9516-4d835e8e2242-kube-api-access-vtprl\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.033467 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " pod="openstack/glance-default-external-api-0" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.066062 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.125660 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.163309 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5l24t\" (UniqueName: \"kubernetes.io/projected/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-kube-api-access-5l24t\") pod \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.163404 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-sb\") pod \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.163449 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-swift-storage-0\") pod \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.163513 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-svc\") pod \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.163542 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-nb\") pod \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.163617 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-config\") pod \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\" (UID: \"93cd1d9c-b6c4-436d-8d64-0b00079f5f42\") " Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.178104 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-kube-api-access-5l24t" (OuterVolumeSpecName: "kube-api-access-5l24t") pod "93cd1d9c-b6c4-436d-8d64-0b00079f5f42" (UID: "93cd1d9c-b6c4-436d-8d64-0b00079f5f42"). InnerVolumeSpecName "kube-api-access-5l24t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.191659 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.192841 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" event={"ID":"ac2e5b3c-86cc-42ce-bc7a-630034757e55","Type":"ContainerStarted","Data":"ec125afb7f4b513bc05e5f566bfca59ab4c2ff7351bcd970948cb183fa0912d6"} Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.202076 4797 generic.go:334] "Generic (PLEG): container finished" podID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerID="572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f" exitCode=0 Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.202160 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" event={"ID":"93cd1d9c-b6c4-436d-8d64-0b00079f5f42","Type":"ContainerDied","Data":"572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f"} Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.202215 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" event={"ID":"93cd1d9c-b6c4-436d-8d64-0b00079f5f42","Type":"ContainerDied","Data":"3dbc0bd9b26d6d7eab11511c81163fec1454632fac68ab39672667f1b61ba178"} Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.202235 4797 scope.go:117] "RemoveContainer" containerID="572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.202681 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-9w2fx" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.210875 4797 generic.go:334] "Generic (PLEG): container finished" podID="855c2e17-ce4e-4541-a378-882268d22af4" containerID="a04639c9b7a5ebdecd4efe75e843728a89325d33c478251558b69d63ebc7114b" exitCode=0 Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.210931 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpv8d" event={"ID":"855c2e17-ce4e-4541-a378-882268d22af4","Type":"ContainerDied","Data":"a04639c9b7a5ebdecd4efe75e843728a89325d33c478251558b69d63ebc7114b"} Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.266029 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5l24t\" (UniqueName: \"kubernetes.io/projected/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-kube-api-access-5l24t\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.428969 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.668611 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.688430 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "93cd1d9c-b6c4-436d-8d64-0b00079f5f42" (UID: "93cd1d9c-b6c4-436d-8d64-0b00079f5f42"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.743895 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93cd1d9c-b6c4-436d-8d64-0b00079f5f42" (UID: "93cd1d9c-b6c4-436d-8d64-0b00079f5f42"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.775730 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.775766 4797 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.830491 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-config" (OuterVolumeSpecName: "config") pod "93cd1d9c-b6c4-436d-8d64-0b00079f5f42" (UID: "93cd1d9c-b6c4-436d-8d64-0b00079f5f42"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.836101 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93cd1d9c-b6c4-436d-8d64-0b00079f5f42" (UID: "93cd1d9c-b6c4-436d-8d64-0b00079f5f42"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.849432 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93cd1d9c-b6c4-436d-8d64-0b00079f5f42" (UID: "93cd1d9c-b6c4-436d-8d64-0b00079f5f42"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.877272 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.877330 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:37 crc kubenswrapper[4797]: I0216 11:25:37.877341 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93cd1d9c-b6c4-436d-8d64-0b00079f5f42-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:38 crc kubenswrapper[4797]: I0216 11:25:38.130794 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9w2fx"] Feb 16 11:25:38 crc kubenswrapper[4797]: I0216 11:25:38.140623 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-9w2fx"] Feb 16 11:25:38 crc kubenswrapper[4797]: I0216 11:25:38.235727 4797 generic.go:334] "Generic (PLEG): container finished" podID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerID="e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233" exitCode=0 Feb 16 11:25:38 crc kubenswrapper[4797]: I0216 11:25:38.235884 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" event={"ID":"ac2e5b3c-86cc-42ce-bc7a-630034757e55","Type":"ContainerDied","Data":"e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233"} Feb 16 11:25:38 crc kubenswrapper[4797]: I0216 11:25:38.243746 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76a621c6-7221-46cd-8385-2c733893ccd0","Type":"ContainerStarted","Data":"a581f8ce2c94dcc1ed138778595fed93cbb31c5cb6ef8e5dad9dc7292e475743"} Feb 16 11:25:39 crc kubenswrapper[4797]: I0216 11:25:39.940078 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:25:39 crc kubenswrapper[4797]: I0216 11:25:39.998211 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" path="/var/lib/kubelet/pods/93cd1d9c-b6c4-436d-8d64-0b00079f5f42/volumes" Feb 16 11:25:40 crc kubenswrapper[4797]: I0216 11:25:40.028876 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.270606 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.279517 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cpv8d" event={"ID":"855c2e17-ce4e-4541-a378-882268d22af4","Type":"ContainerDied","Data":"ffddbc7720d70108d436e0aaf7f54029f016b57278c4ac0461225e216f6eebfa"} Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.279557 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffddbc7720d70108d436e0aaf7f54029f016b57278c4ac0461225e216f6eebfa" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.279615 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cpv8d" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.458005 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-fernet-keys\") pod \"855c2e17-ce4e-4541-a378-882268d22af4\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.458072 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-combined-ca-bundle\") pod \"855c2e17-ce4e-4541-a378-882268d22af4\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.458218 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-config-data\") pod \"855c2e17-ce4e-4541-a378-882268d22af4\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.458288 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6bqw\" (UniqueName: \"kubernetes.io/projected/855c2e17-ce4e-4541-a378-882268d22af4-kube-api-access-t6bqw\") pod \"855c2e17-ce4e-4541-a378-882268d22af4\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.458372 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-scripts\") pod \"855c2e17-ce4e-4541-a378-882268d22af4\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.458414 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-credential-keys\") pod \"855c2e17-ce4e-4541-a378-882268d22af4\" (UID: \"855c2e17-ce4e-4541-a378-882268d22af4\") " Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.471531 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "855c2e17-ce4e-4541-a378-882268d22af4" (UID: "855c2e17-ce4e-4541-a378-882268d22af4"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.479777 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-scripts" (OuterVolumeSpecName: "scripts") pod "855c2e17-ce4e-4541-a378-882268d22af4" (UID: "855c2e17-ce4e-4541-a378-882268d22af4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.485709 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "855c2e17-ce4e-4541-a378-882268d22af4" (UID: "855c2e17-ce4e-4541-a378-882268d22af4"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.485831 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855c2e17-ce4e-4541-a378-882268d22af4-kube-api-access-t6bqw" (OuterVolumeSpecName: "kube-api-access-t6bqw") pod "855c2e17-ce4e-4541-a378-882268d22af4" (UID: "855c2e17-ce4e-4541-a378-882268d22af4"). InnerVolumeSpecName "kube-api-access-t6bqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.509728 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "855c2e17-ce4e-4541-a378-882268d22af4" (UID: "855c2e17-ce4e-4541-a378-882268d22af4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.551538 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-config-data" (OuterVolumeSpecName: "config-data") pod "855c2e17-ce4e-4541-a378-882268d22af4" (UID: "855c2e17-ce4e-4541-a378-882268d22af4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.560230 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.560264 4797 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.560274 4797 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.560283 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.560293 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/855c2e17-ce4e-4541-a378-882268d22af4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:41 crc kubenswrapper[4797]: I0216 11:25:41.560301 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6bqw\" (UniqueName: \"kubernetes.io/projected/855c2e17-ce4e-4541-a378-882268d22af4-kube-api-access-t6bqw\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.367668 4797 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode31bcde1-c735-4e57-907d-2876334827d6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode31bcde1-c735-4e57-907d-2876334827d6] : Timed out while waiting for systemd to remove kubepods-besteffort-pode31bcde1_c735_4e57_907d_2876334827d6.slice" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.377725 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-cpv8d"] Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.391291 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cpv8d"] Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.473200 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4xnc7"] Feb 16 11:25:42 crc kubenswrapper[4797]: E0216 11:25:42.474017 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855c2e17-ce4e-4541-a378-882268d22af4" containerName="keystone-bootstrap" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.474034 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="855c2e17-ce4e-4541-a378-882268d22af4" containerName="keystone-bootstrap" Feb 16 11:25:42 crc kubenswrapper[4797]: E0216 11:25:42.474054 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerName="dnsmasq-dns" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.474060 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerName="dnsmasq-dns" Feb 16 11:25:42 crc kubenswrapper[4797]: E0216 11:25:42.474071 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerName="init" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.474077 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerName="init" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.474978 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="93cd1d9c-b6c4-436d-8d64-0b00079f5f42" containerName="dnsmasq-dns" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.474996 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="855c2e17-ce4e-4541-a378-882268d22af4" containerName="keystone-bootstrap" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.475713 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.477879 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.477978 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kpt68" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.478016 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.478093 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.499800 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4xnc7"] Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.580943 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-config-data\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.581247 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-scripts\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.581372 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqkpw\" (UniqueName: \"kubernetes.io/projected/e3129f86-1462-4e40-8695-e4ae737ebf5f-kube-api-access-hqkpw\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.581467 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-credential-keys\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.581517 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-fernet-keys\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.581533 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-combined-ca-bundle\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.683186 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqkpw\" (UniqueName: \"kubernetes.io/projected/e3129f86-1462-4e40-8695-e4ae737ebf5f-kube-api-access-hqkpw\") pod 
\"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.683282 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-credential-keys\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.683311 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-fernet-keys\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.683327 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-combined-ca-bundle\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.683421 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-config-data\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.683460 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-scripts\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.688787 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-scripts\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.689044 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-combined-ca-bundle\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.693463 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-config-data\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.696338 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-fernet-keys\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.705053 4797 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-credential-keys\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.706562 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqkpw\" (UniqueName: \"kubernetes.io/projected/e3129f86-1462-4e40-8695-e4ae737ebf5f-kube-api-access-hqkpw\") pod \"keystone-bootstrap-4xnc7\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") " pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:42 crc kubenswrapper[4797]: I0216 11:25:42.824976 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4xnc7" Feb 16 11:25:43 crc kubenswrapper[4797]: I0216 11:25:43.303227 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1d5f264-2ed7-43a2-8179-53917835fc77","Type":"ContainerStarted","Data":"91565b0836d159431b55f15a2f46affad4b5f1d45b1964db277cb5191b73cb17"} Feb 16 11:25:43 crc kubenswrapper[4797]: I0216 11:25:43.999567 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855c2e17-ce4e-4541-a378-882268d22af4" path="/var/lib/kubelet/pods/855c2e17-ce4e-4541-a378-882268d22af4/volumes" Feb 16 11:25:45 crc kubenswrapper[4797]: E0216 11:25:45.114768 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:25:45 crc kubenswrapper[4797]: E0216 11:25:45.115098 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:25:45 crc kubenswrapper[4797]: E0216 11:25:45.115227 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 11:25:45 crc kubenswrapper[4797]: E0216 11:25:45.116704 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:25:45 crc kubenswrapper[4797]: I0216 11:25:45.905627 4797 scope.go:117] "RemoveContainer" containerID="c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8"
Feb 16 11:25:46 crc kubenswrapper[4797]: I0216 11:25:46.480733 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 16 11:25:51 crc kubenswrapper[4797]: I0216 11:25:51.434417 4797 generic.go:334] "Generic (PLEG): container finished" podID="bbc2d12e-1b1b-43cc-baad-ff26e8423891" containerID="3a65263a2a7e396e956e7a22682c657dafad794a59d372110ff6b19e5b2691aa" exitCode=0
Feb 16 11:25:51 crc kubenswrapper[4797]: I0216 11:25:51.434603 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grph6" event={"ID":"bbc2d12e-1b1b-43cc-baad-ff26e8423891","Type":"ContainerDied","Data":"3a65263a2a7e396e956e7a22682c657dafad794a59d372110ff6b19e5b2691aa"}
Feb 16 11:25:56 crc kubenswrapper[4797]: E0216 11:25:56.985926 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.060370 4797 scope.go:117] "RemoveContainer" containerID="572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f"
Feb 16 11:25:57 crc kubenswrapper[4797]: E0216 11:25:57.061018 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f\": container with ID starting with 572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f not found: ID does not exist" containerID="572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f"
Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.061068 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f"} err="failed to get container status \"572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f\": rpc error: code = NotFound desc = could not find container \"572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f\": container with ID starting with 572f70926abfbe1c798b8f25cd435eacffea4e6d532a697199920f08ee09186f not found: ID does not exist"
Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.061100 4797 scope.go:117] "RemoveContainer" containerID="c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8"
Feb 16 11:25:57 crc kubenswrapper[4797]: E0216 11:25:57.061634 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8\": container with ID starting with c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8 not found: ID does not exist" containerID="c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8"
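This is the standard ErrImagePull to ImagePullBackOff progression: the hard pull failure for openstack-cloudkitty-api:current at 11:25:45 puts the container into a growing retry delay, so by 11:25:56 the sync loop just logs "Back-off pulling image" and skips the pod until the next window. A Go sketch of that backoff bookkeeping; the 10s initial delay and 5m cap match kubelet's commonly cited defaults but should be treated as assumptions here, not values read from this node:

    // Illustrative sketch of image-pull backoff: each failed pull doubles
    // the retry delay up to a cap, producing ErrImagePull first and
    // ImagePullBackOff on subsequent syncs.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type backoff struct {
        delay time.Duration
        max   time.Duration
    }

    // next returns the current delay and doubles it for the next failure.
    func (b *backoff) next() time.Duration {
        d := b.delay
        b.delay *= 2
        if b.delay > b.max {
            b.delay = b.max
        }
        return d
    }

    func pullImage(ref string) error {
        // Stand-in for the CRI ImageService pull that failed in the log.
        return errors.New("reading manifest current in " + ref + ": tag deleted or expired")
    }

    func main() {
        b := &backoff{delay: 10 * time.Second, max: 5 * time.Minute}
        ref := "quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api"
        if err := pullImage(ref); err != nil {
            fmt.Printf("ErrImagePull: %v; ImagePullBackOff: retry in %s\n", err, b.next())
        }
    }

The same pattern recurs below for cinder-db-sync at 11:25:58, where the pull is cancelled mid-copy and the pod immediately lands in back-off.

Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.061660 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8"}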
err="failed to get container status \"c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8\": rpc error: code = NotFound desc = could not find container \"c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8\": container with ID starting with c48342649c1e8f8bcd1bd0ff4788c8187c6371859ad33df019ffbe39798c0ed8 not found: ID does not exist" Feb 16 11:25:57 crc kubenswrapper[4797]: W0216 11:25:57.084492 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0965af42_84ad_45d8_9516_4d835e8e2242.slice/crio-567fa56e72b909eb601c79b8b5fb4f7e25b50292e4e7370a40f49e45b652cc94 WatchSource:0}: Error finding container 567fa56e72b909eb601c79b8b5fb4f7e25b50292e4e7370a40f49e45b652cc94: Status 404 returned error can't find the container with id 567fa56e72b909eb601c79b8b5fb4f7e25b50292e4e7370a40f49e45b652cc94 Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.202953 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.285227 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-config\") pod \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.285321 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6bxz\" (UniqueName: \"kubernetes.io/projected/bbc2d12e-1b1b-43cc-baad-ff26e8423891-kube-api-access-k6bxz\") pod \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.285416 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-combined-ca-bundle\") pod \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\" (UID: \"bbc2d12e-1b1b-43cc-baad-ff26e8423891\") " Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.290558 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc2d12e-1b1b-43cc-baad-ff26e8423891-kube-api-access-k6bxz" (OuterVolumeSpecName: "kube-api-access-k6bxz") pod "bbc2d12e-1b1b-43cc-baad-ff26e8423891" (UID: "bbc2d12e-1b1b-43cc-baad-ff26e8423891"). InnerVolumeSpecName "kube-api-access-k6bxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.311358 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbc2d12e-1b1b-43cc-baad-ff26e8423891" (UID: "bbc2d12e-1b1b-43cc-baad-ff26e8423891"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.311516 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-config" (OuterVolumeSpecName: "config") pod "bbc2d12e-1b1b-43cc-baad-ff26e8423891" (UID: "bbc2d12e-1b1b-43cc-baad-ff26e8423891"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.388409 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.388448 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6bxz\" (UniqueName: \"kubernetes.io/projected/bbc2d12e-1b1b-43cc-baad-ff26e8423891-kube-api-access-k6bxz\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.388466 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc2d12e-1b1b-43cc-baad-ff26e8423891-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.497474 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0965af42-84ad-45d8-9516-4d835e8e2242","Type":"ContainerStarted","Data":"567fa56e72b909eb601c79b8b5fb4f7e25b50292e4e7370a40f49e45b652cc94"} Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.498848 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grph6" event={"ID":"bbc2d12e-1b1b-43cc-baad-ff26e8423891","Type":"ContainerDied","Data":"694111c6652b7867b9dce8297b9c8121554eb44b7a5e93ebc1f3feac7d5d511a"} Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.498876 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="694111c6652b7867b9dce8297b9c8121554eb44b7a5e93ebc1f3feac7d5d511a" Feb 16 11:25:57 crc kubenswrapper[4797]: I0216 11:25:57.498925 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-grph6" Feb 16 11:25:58 crc kubenswrapper[4797]: E0216 11:25:58.355090 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 11:25:58 crc kubenswrapper[4797]: E0216 11:25:58.355255 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nwnxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2dfc7_openstack(062948d0-fd09-4e11-904d-a346a430ee4f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 11:25:58 crc kubenswrapper[4797]: E0216 11:25:58.357808 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2dfc7" podUID="062948d0-fd09-4e11-904d-a346a430ee4f" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.471560 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mnvc5"] Feb 16 
11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.521358 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gg8pv"] Feb 16 11:25:58 crc kubenswrapper[4797]: E0216 11:25:58.521794 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc2d12e-1b1b-43cc-baad-ff26e8423891" containerName="neutron-db-sync" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.521806 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc2d12e-1b1b-43cc-baad-ff26e8423891" containerName="neutron-db-sync" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.521985 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc2d12e-1b1b-43cc-baad-ff26e8423891" containerName="neutron-db-sync" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.522970 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: E0216 11:25:58.571611 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2dfc7" podUID="062948d0-fd09-4e11-904d-a346a430ee4f" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.572436 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gg8pv"] Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.603448 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7dc766fb7b-kdzz7"] Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.605661 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.613019 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.613171 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.613318 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.613494 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-7dchp" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.625614 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.625721 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.626874 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.626916 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.626991 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6km\" (UniqueName: \"kubernetes.io/projected/fff75593-1e2b-47c3-8219-2105ebaca44d-kube-api-access-2r6km\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.627068 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.632977 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc766fb7b-kdzz7"] Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729238 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6km\" (UniqueName: \"kubernetes.io/projected/fff75593-1e2b-47c3-8219-2105ebaca44d-kube-api-access-2r6km\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729299 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729353 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-config\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729386 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729440 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-combined-ca-bundle\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " 
pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729487 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-ovndb-tls-certs\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729522 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729546 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-httpd-config\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729565 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729600 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.729642 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288wm\" (UniqueName: \"kubernetes.io/projected/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-kube-api-access-288wm\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.730479 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.730897 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.731827 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc 
kubenswrapper[4797]: I0216 11:25:58.732255 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-svc\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.734406 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.749514 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6km\" (UniqueName: \"kubernetes.io/projected/fff75593-1e2b-47c3-8219-2105ebaca44d-kube-api-access-2r6km\") pod \"dnsmasq-dns-55f844cf75-gg8pv\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.832647 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-httpd-config\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.832721 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288wm\" (UniqueName: \"kubernetes.io/projected/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-kube-api-access-288wm\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.832794 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-config\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.832858 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-combined-ca-bundle\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.832901 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-ovndb-tls-certs\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.840090 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-httpd-config\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.840953 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-ovndb-tls-certs\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.841449 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-config\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.847184 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-combined-ca-bundle\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.848000 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288wm\" (UniqueName: \"kubernetes.io/projected/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-kube-api-access-288wm\") pod \"neutron-7dc766fb7b-kdzz7\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.891290 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4xnc7"] Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.948013 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:25:58 crc kubenswrapper[4797]: I0216 11:25:58.952087 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.599979 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1d5f264-2ed7-43a2-8179-53917835fc77","Type":"ContainerStarted","Data":"ef3c1afcf981a70f9eed6a2bd4fdb20463a048b7c23a962437e282ead9fd0165"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.606662 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gg8pv"] Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.608873 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z8bpc" event={"ID":"24fea779-c008-4fda-b2d0-e3201f7dfaed","Type":"ContainerStarted","Data":"9fad625b91c4f4a963210889c31deed8e2cf4bc1eb4474ee4bb40520b5de9912"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.641597 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-z8bpc" podStartSLOduration=3.361627531 podStartE2EDuration="29.641563144s" podCreationTimestamp="2026-02-16 11:25:30 +0000 UTC" firstStartedPulling="2026-02-16 11:25:31.98336161 +0000 UTC m=+1126.703546590" lastFinishedPulling="2026-02-16 11:25:58.263297223 +0000 UTC m=+1152.983482203" observedRunningTime="2026-02-16 11:25:59.640619719 +0000 UTC m=+1154.360804719" watchObservedRunningTime="2026-02-16 11:25:59.641563144 +0000 UTC m=+1154.361748144" Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.648447 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5kmr4" event={"ID":"35f90c62-8793-4bcc-8b06-9b0b710776d7","Type":"ContainerStarted","Data":"445198b22303290a64416d26d63969ce6ab88bfa4a565134ec5bea1c726106a2"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.668719 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" event={"ID":"ac2e5b3c-86cc-42ce-bc7a-630034757e55","Type":"ContainerStarted","Data":"05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.668988 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" podUID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerName="dnsmasq-dns" containerID="cri-o://05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1" gracePeriod=10 Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.670348 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.679275 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerStarted","Data":"34fab4cc55adc1a4ff05f3d8123f52bbaf777fcbcf8b214b844d1782f191e045"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.681311 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5kmr4" podStartSLOduration=3.326135931 podStartE2EDuration="29.681295091s" podCreationTimestamp="2026-02-16 11:25:30 +0000 UTC" firstStartedPulling="2026-02-16 11:25:31.937888727 +0000 UTC m=+1126.658073707" lastFinishedPulling="2026-02-16 11:25:58.293047887 +0000 UTC m=+1153.013232867" observedRunningTime="2026-02-16 11:25:59.673608851 +0000 UTC m=+1154.393793831" watchObservedRunningTime="2026-02-16 11:25:59.681295091 +0000 UTC 
m=+1154.401480071" Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.689086 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"76a621c6-7221-46cd-8385-2c733893ccd0","Type":"ContainerStarted","Data":"b9e0d5113aa1822c9567f3dfe9e2d721b364186386c4e6200d2e8de4f6f4231b"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.695672 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4xnc7" event={"ID":"e3129f86-1462-4e40-8695-e4ae737ebf5f","Type":"ContainerStarted","Data":"98a70302fbf2b9c6b1e49c622daa3a979d6e1cafc85f5c6e3f75d36132846f5d"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.695712 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4xnc7" event={"ID":"e3129f86-1462-4e40-8695-e4ae737ebf5f","Type":"ContainerStarted","Data":"0f52b435b0e6a0bc035ec9957bafb004a2bab390d16d027302c6c7e08f262d00"} Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.714128 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" podStartSLOduration=24.714100128 podStartE2EDuration="24.714100128s" podCreationTimestamp="2026-02-16 11:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:25:59.694928683 +0000 UTC m=+1154.415113663" watchObservedRunningTime="2026-02-16 11:25:59.714100128 +0000 UTC m=+1154.434285108" Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.740854 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=43.740829739 podStartE2EDuration="43.740829739s" podCreationTimestamp="2026-02-16 11:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:25:59.728640245 +0000 UTC m=+1154.448825225" watchObservedRunningTime="2026-02-16 11:25:59.740829739 +0000 UTC m=+1154.461014719" Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.830173 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4xnc7" podStartSLOduration=17.830145541 podStartE2EDuration="17.830145541s" podCreationTimestamp="2026-02-16 11:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:25:59.743215064 +0000 UTC m=+1154.463400044" watchObservedRunningTime="2026-02-16 11:25:59.830145541 +0000 UTC m=+1154.550330531" Feb 16 11:25:59 crc kubenswrapper[4797]: I0216 11:25:59.898141 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7dc766fb7b-kdzz7"] Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.603995 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.684145 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4jvl\" (UniqueName: \"kubernetes.io/projected/ac2e5b3c-86cc-42ce-bc7a-630034757e55-kube-api-access-n4jvl\") pod \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.684264 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-config\") pod \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.684304 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-sb\") pod \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.684353 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-nb\") pod \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.684415 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-svc\") pod \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.684442 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-swift-storage-0\") pod \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\" (UID: \"ac2e5b3c-86cc-42ce-bc7a-630034757e55\") " Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.709712 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac2e5b3c-86cc-42ce-bc7a-630034757e55-kube-api-access-n4jvl" (OuterVolumeSpecName: "kube-api-access-n4jvl") pod "ac2e5b3c-86cc-42ce-bc7a-630034757e55" (UID: "ac2e5b3c-86cc-42ce-bc7a-630034757e55"). InnerVolumeSpecName "kube-api-access-n4jvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.727191 4797 generic.go:334] "Generic (PLEG): container finished" podID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerID="7f53a00fe26c22b5db44fa88ad192b7c63ecb6b675d1fb118f6e07121842436a" exitCode=0 Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.727258 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" event={"ID":"fff75593-1e2b-47c3-8219-2105ebaca44d","Type":"ContainerDied","Data":"7f53a00fe26c22b5db44fa88ad192b7c63ecb6b675d1fb118f6e07121842436a"} Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.727283 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" event={"ID":"fff75593-1e2b-47c3-8219-2105ebaca44d","Type":"ContainerStarted","Data":"62d812a303c5dae2edde8b085d7ee9cf8ae1b1ff7eb74ecd5755dee333158e14"} Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.734135 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc766fb7b-kdzz7" event={"ID":"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c","Type":"ContainerStarted","Data":"c132e0058d25ce2d261cf11bfeef444d21c0c5ecffbbc497a6d132e050a04673"} Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.734179 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc766fb7b-kdzz7" event={"ID":"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c","Type":"ContainerStarted","Data":"074f386ff93c1a2bc99450160d476bdbfb6ac27f2b1b9b670fd0e46e9114d158"} Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.740271 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0965af42-84ad-45d8-9516-4d835e8e2242","Type":"ContainerStarted","Data":"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad"} Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.741629 4797 generic.go:334] "Generic (PLEG): container finished" podID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerID="05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1" exitCode=0 Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.742498 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.742673 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" event={"ID":"ac2e5b3c-86cc-42ce-bc7a-630034757e55","Type":"ContainerDied","Data":"05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1"} Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.742701 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mnvc5" event={"ID":"ac2e5b3c-86cc-42ce-bc7a-630034757e55","Type":"ContainerDied","Data":"ec125afb7f4b513bc05e5f566bfca59ab4c2ff7351bcd970948cb183fa0912d6"} Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.742717 4797 scope.go:117] "RemoveContainer" containerID="05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.791159 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4jvl\" (UniqueName: \"kubernetes.io/projected/ac2e5b3c-86cc-42ce-bc7a-630034757e55-kube-api-access-n4jvl\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.815270 4797 scope.go:117] "RemoveContainer" containerID="e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.857455 4797 scope.go:117] "RemoveContainer" containerID="05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1" Feb 16 11:26:00 crc kubenswrapper[4797]: E0216 11:26:00.859208 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1\": container with ID starting with 05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1 not found: ID does not exist" containerID="05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.859443 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1"} err="failed to get container status \"05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1\": rpc error: code = NotFound desc = could not find container \"05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1\": container with ID starting with 05c7b8a08071af114fceae622e7439c4fb7b6442316f20ec5441fac3227b29c1 not found: ID does not exist" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.859499 4797 scope.go:117] "RemoveContainer" containerID="e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233" Feb 16 11:26:00 crc kubenswrapper[4797]: E0216 11:26:00.862328 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233\": container with ID starting with e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233 not found: ID does not exist" containerID="e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233" Feb 16 11:26:00 crc kubenswrapper[4797]: I0216 11:26:00.862376 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233"} err="failed to get container status \"e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233\": 
rpc error: code = NotFound desc = could not find container \"e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233\": container with ID starting with e608ba0b3f1cfc5c677398fed0d1b63d71801db3eaa97665bb8db8529d867233 not found: ID does not exist" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.243219 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac2e5b3c-86cc-42ce-bc7a-630034757e55" (UID: "ac2e5b3c-86cc-42ce-bc7a-630034757e55"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.247508 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac2e5b3c-86cc-42ce-bc7a-630034757e55" (UID: "ac2e5b3c-86cc-42ce-bc7a-630034757e55"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.254252 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac2e5b3c-86cc-42ce-bc7a-630034757e55" (UID: "ac2e5b3c-86cc-42ce-bc7a-630034757e55"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.295638 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac2e5b3c-86cc-42ce-bc7a-630034757e55" (UID: "ac2e5b3c-86cc-42ce-bc7a-630034757e55"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.296787 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-config" (OuterVolumeSpecName: "config") pod "ac2e5b3c-86cc-42ce-bc7a-630034757e55" (UID: "ac2e5b3c-86cc-42ce-bc7a-630034757e55"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.305923 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.305968 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.305982 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.305992 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.306002 4797 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac2e5b3c-86cc-42ce-bc7a-630034757e55-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.329808 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-664675cd85-bc4lp"] Feb 16 11:26:01 crc kubenswrapper[4797]: E0216 11:26:01.330184 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerName="dnsmasq-dns" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.330200 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerName="dnsmasq-dns" Feb 16 11:26:01 crc kubenswrapper[4797]: E0216 11:26:01.330244 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerName="init" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.330251 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerName="init" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.330433 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" containerName="dnsmasq-dns" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.331753 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.333557 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.333668 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.344188 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-664675cd85-bc4lp"] Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.408330 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-combined-ca-bundle\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.408427 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-internal-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.408460 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-httpd-config\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.408524 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-public-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.408551 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-ovndb-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.408641 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whlwx\" (UniqueName: \"kubernetes.io/projected/a81700a8-2372-4b4d-a769-5b6936ac7aba-kube-api-access-whlwx\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.408704 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-config\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.508007 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mnvc5"] Feb 16 11:26:01 crc 
kubenswrapper[4797]: I0216 11:26:01.510170 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whlwx\" (UniqueName: \"kubernetes.io/projected/a81700a8-2372-4b4d-a769-5b6936ac7aba-kube-api-access-whlwx\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.510288 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-config\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.510334 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-combined-ca-bundle\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.510400 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-internal-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.510445 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-httpd-config\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.510505 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-public-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.510536 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-ovndb-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.517771 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-internal-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.530071 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mnvc5"] Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.530309 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-ovndb-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " 
pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.532505 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-httpd-config\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.534137 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-combined-ca-bundle\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.536249 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-config\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.547712 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-public-tls-certs\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.555286 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whlwx\" (UniqueName: \"kubernetes.io/projected/a81700a8-2372-4b4d-a769-5b6936ac7aba-kube-api-access-whlwx\") pod \"neutron-664675cd85-bc4lp\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") " pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.748450 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.748553 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.754236 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.755753 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0965af42-84ad-45d8-9516-4d835e8e2242","Type":"ContainerStarted","Data":"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e"} Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.755907 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-log" containerID="cri-o://cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad" gracePeriod=30 Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.756171 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-httpd" containerID="cri-o://3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e" gracePeriod=30 Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.764787 4797 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1d5f264-2ed7-43a2-8179-53917835fc77","Type":"ContainerStarted","Data":"2eb48dd7d26f1fee3bdfd0bd308f9cba173d4152dd4d9af00bd2f3d5c05303fd"} Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.764826 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-log" containerID="cri-o://ef3c1afcf981a70f9eed6a2bd4fdb20463a048b7c23a962437e282ead9fd0165" gracePeriod=30 Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.764922 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-httpd" containerID="cri-o://2eb48dd7d26f1fee3bdfd0bd308f9cba173d4152dd4d9af00bd2f3d5c05303fd" gracePeriod=30 Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.767616 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc766fb7b-kdzz7" event={"ID":"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c","Type":"ContainerStarted","Data":"ca36d862979bc5ce0af35447b3630046aac7e584dfc56a5a3af7c9b2cea22fcc"} Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.767973 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.774864 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.786686 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.879558 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=26.879534879 podStartE2EDuration="26.879534879s" podCreationTimestamp="2026-02-16 11:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:01.856449498 +0000 UTC m=+1156.576634478" watchObservedRunningTime="2026-02-16 11:26:01.879534879 +0000 UTC m=+1156.599719859" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.914101 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=26.914051623 podStartE2EDuration="26.914051623s" podCreationTimestamp="2026-02-16 11:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:01.892882164 +0000 UTC m=+1156.613067144" watchObservedRunningTime="2026-02-16 11:26:01.914051623 +0000 UTC m=+1156.634236613" Feb 16 11:26:01 crc kubenswrapper[4797]: I0216 11:26:01.947943 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7dc766fb7b-kdzz7" podStartSLOduration=3.947924659 podStartE2EDuration="3.947924659s" podCreationTimestamp="2026-02-16 11:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:01.925379333 +0000 UTC m=+1156.645564303" watchObservedRunningTime="2026-02-16 11:26:01.947924659 +0000 UTC m=+1156.668109629" Feb 16 11:26:02 crc 
kubenswrapper[4797]: I0216 11:26:02.000654 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac2e5b3c-86cc-42ce-bc7a-630034757e55" path="/var/lib/kubelet/pods/ac2e5b3c-86cc-42ce-bc7a-630034757e55/volumes" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.737788 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.784882 4797 generic.go:334] "Generic (PLEG): container finished" podID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerID="2eb48dd7d26f1fee3bdfd0bd308f9cba173d4152dd4d9af00bd2f3d5c05303fd" exitCode=0 Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.784909 4797 generic.go:334] "Generic (PLEG): container finished" podID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerID="ef3c1afcf981a70f9eed6a2bd4fdb20463a048b7c23a962437e282ead9fd0165" exitCode=143 Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.784949 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1d5f264-2ed7-43a2-8179-53917835fc77","Type":"ContainerDied","Data":"2eb48dd7d26f1fee3bdfd0bd308f9cba173d4152dd4d9af00bd2f3d5c05303fd"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.784974 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1d5f264-2ed7-43a2-8179-53917835fc77","Type":"ContainerDied","Data":"ef3c1afcf981a70f9eed6a2bd4fdb20463a048b7c23a962437e282ead9fd0165"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.803956 4797 generic.go:334] "Generic (PLEG): container finished" podID="0965af42-84ad-45d8-9516-4d835e8e2242" containerID="3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e" exitCode=0 Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.804011 4797 generic.go:334] "Generic (PLEG): container finished" podID="0965af42-84ad-45d8-9516-4d835e8e2242" containerID="cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad" exitCode=143 Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.804053 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0965af42-84ad-45d8-9516-4d835e8e2242","Type":"ContainerDied","Data":"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.804109 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0965af42-84ad-45d8-9516-4d835e8e2242","Type":"ContainerDied","Data":"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.804118 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0965af42-84ad-45d8-9516-4d835e8e2242","Type":"ContainerDied","Data":"567fa56e72b909eb601c79b8b5fb4f7e25b50292e4e7370a40f49e45b652cc94"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.804133 4797 scope.go:117] "RemoveContainer" containerID="3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.804299 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.812199 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerStarted","Data":"3a4fec4acf58ad5024698898910bc1c83246e0892c3afd601b13010a18c5474e"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.815140 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" event={"ID":"fff75593-1e2b-47c3-8219-2105ebaca44d","Type":"ContainerStarted","Data":"ec0e0d40a258184eb2adec76a9dc0ce4f6800bc623248761ef6aa27c9d6c9635"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.815301 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.818937 4797 generic.go:334] "Generic (PLEG): container finished" podID="35f90c62-8793-4bcc-8b06-9b0b710776d7" containerID="445198b22303290a64416d26d63969ce6ab88bfa4a565134ec5bea1c726106a2" exitCode=0 Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.819039 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5kmr4" event={"ID":"35f90c62-8793-4bcc-8b06-9b0b710776d7","Type":"ContainerDied","Data":"445198b22303290a64416d26d63969ce6ab88bfa4a565134ec5bea1c726106a2"} Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.845802 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-scripts\") pod \"0965af42-84ad-45d8-9516-4d835e8e2242\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.845839 4797 scope.go:117] "RemoveContainer" containerID="cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.846168 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"0965af42-84ad-45d8-9516-4d835e8e2242\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.846210 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-combined-ca-bundle\") pod \"0965af42-84ad-45d8-9516-4d835e8e2242\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.846236 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtprl\" (UniqueName: \"kubernetes.io/projected/0965af42-84ad-45d8-9516-4d835e8e2242-kube-api-access-vtprl\") pod \"0965af42-84ad-45d8-9516-4d835e8e2242\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.846281 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-config-data\") pod \"0965af42-84ad-45d8-9516-4d835e8e2242\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.846330 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-httpd-run\") pod \"0965af42-84ad-45d8-9516-4d835e8e2242\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.846363 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-logs\") pod \"0965af42-84ad-45d8-9516-4d835e8e2242\" (UID: \"0965af42-84ad-45d8-9516-4d835e8e2242\") " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.847134 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-logs" (OuterVolumeSpecName: "logs") pod "0965af42-84ad-45d8-9516-4d835e8e2242" (UID: "0965af42-84ad-45d8-9516-4d835e8e2242"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.847124 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0965af42-84ad-45d8-9516-4d835e8e2242" (UID: "0965af42-84ad-45d8-9516-4d835e8e2242"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.860620 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" podStartSLOduration=4.860555439 podStartE2EDuration="4.860555439s" podCreationTimestamp="2026-02-16 11:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:02.833690585 +0000 UTC m=+1157.553875565" watchObservedRunningTime="2026-02-16 11:26:02.860555439 +0000 UTC m=+1157.580740419" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.860794 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-scripts" (OuterVolumeSpecName: "scripts") pod "0965af42-84ad-45d8-9516-4d835e8e2242" (UID: "0965af42-84ad-45d8-9516-4d835e8e2242"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.866962 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0965af42-84ad-45d8-9516-4d835e8e2242-kube-api-access-vtprl" (OuterVolumeSpecName: "kube-api-access-vtprl") pod "0965af42-84ad-45d8-9516-4d835e8e2242" (UID: "0965af42-84ad-45d8-9516-4d835e8e2242"). InnerVolumeSpecName "kube-api-access-vtprl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.871200 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0965af42-84ad-45d8-9516-4d835e8e2242" (UID: "0965af42-84ad-45d8-9516-4d835e8e2242"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.871518 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.885494 4797 scope.go:117] "RemoveContainer" containerID="3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e" Feb 16 11:26:02 crc kubenswrapper[4797]: E0216 11:26:02.887759 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e\": container with ID starting with 3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e not found: ID does not exist" containerID="3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.887884 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e"} err="failed to get container status \"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e\": rpc error: code = NotFound desc = could not find container \"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e\": container with ID starting with 3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e not found: ID does not exist" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.887992 4797 scope.go:117] "RemoveContainer" containerID="cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad" Feb 16 11:26:02 crc kubenswrapper[4797]: E0216 11:26:02.888443 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad\": container with ID starting with cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad not found: ID does not exist" containerID="cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.888540 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad"} err="failed to get container status \"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad\": rpc error: code = NotFound desc = could not find container \"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad\": container with ID starting with cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad not found: ID does not exist" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.888658 4797 scope.go:117] "RemoveContainer" containerID="3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.889299 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e"} err="failed to get container status \"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e\": rpc error: code = NotFound desc = could not find container \"3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e\": container with ID starting with 3c64da90e8b2970cadb427442a8d767edc757f20bfaeaa54053bd9acd0f3371e not found: ID does not exist" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.889350 4797 scope.go:117] "RemoveContainer" containerID="cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 
11:26:02.889450 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82" (OuterVolumeSpecName: "glance") pod "0965af42-84ad-45d8-9516-4d835e8e2242" (UID: "0965af42-84ad-45d8-9516-4d835e8e2242"). InnerVolumeSpecName "pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.889658 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad"} err="failed to get container status \"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad\": rpc error: code = NotFound desc = could not find container \"cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad\": container with ID starting with cd47742449318e2ab5b455960e5d686757975fa4a1dc036a0fe9f022bd4242ad not found: ID does not exist" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.952018 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.952057 4797 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") on node \"crc\" " Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.952077 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.952087 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtprl\" (UniqueName: \"kubernetes.io/projected/0965af42-84ad-45d8-9516-4d835e8e2242-kube-api-access-vtprl\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.952121 4797 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.952145 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0965af42-84ad-45d8-9516-4d835e8e2242-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.974146 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-config-data" (OuterVolumeSpecName: "config-data") pod "0965af42-84ad-45d8-9516-4d835e8e2242" (UID: "0965af42-84ad-45d8-9516-4d835e8e2242"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.979020 4797 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 11:26:02 crc kubenswrapper[4797]: I0216 11:26:02.979297 4797 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82") on node "crc" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.054789 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-664675cd85-bc4lp"] Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.055516 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"d1d5f264-2ed7-43a2-8179-53917835fc77\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.055601 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-logs\") pod \"d1d5f264-2ed7-43a2-8179-53917835fc77\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.055736 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5knw\" (UniqueName: \"kubernetes.io/projected/d1d5f264-2ed7-43a2-8179-53917835fc77-kube-api-access-p5knw\") pod \"d1d5f264-2ed7-43a2-8179-53917835fc77\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.055778 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-combined-ca-bundle\") pod \"d1d5f264-2ed7-43a2-8179-53917835fc77\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.055876 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-scripts\") pod \"d1d5f264-2ed7-43a2-8179-53917835fc77\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.055917 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-config-data\") pod \"d1d5f264-2ed7-43a2-8179-53917835fc77\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.055977 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-httpd-run\") pod \"d1d5f264-2ed7-43a2-8179-53917835fc77\" (UID: \"d1d5f264-2ed7-43a2-8179-53917835fc77\") " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.056456 4797 reconciler_common.go:293] "Volume detached for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.056471 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0965af42-84ad-45d8-9516-4d835e8e2242-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.056804 4797 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d1d5f264-2ed7-43a2-8179-53917835fc77" (UID: "d1d5f264-2ed7-43a2-8179-53917835fc77"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.062423 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d5f264-2ed7-43a2-8179-53917835fc77-kube-api-access-p5knw" (OuterVolumeSpecName: "kube-api-access-p5knw") pod "d1d5f264-2ed7-43a2-8179-53917835fc77" (UID: "d1d5f264-2ed7-43a2-8179-53917835fc77"). InnerVolumeSpecName "kube-api-access-p5knw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.062852 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-logs" (OuterVolumeSpecName: "logs") pod "d1d5f264-2ed7-43a2-8179-53917835fc77" (UID: "d1d5f264-2ed7-43a2-8179-53917835fc77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.066442 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-scripts" (OuterVolumeSpecName: "scripts") pod "d1d5f264-2ed7-43a2-8179-53917835fc77" (UID: "d1d5f264-2ed7-43a2-8179-53917835fc77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.081088 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4" (OuterVolumeSpecName: "glance") pod "d1d5f264-2ed7-43a2-8179-53917835fc77" (UID: "d1d5f264-2ed7-43a2-8179-53917835fc77"). InnerVolumeSpecName "pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.088129 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1d5f264-2ed7-43a2-8179-53917835fc77" (UID: "d1d5f264-2ed7-43a2-8179-53917835fc77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.123754 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-config-data" (OuterVolumeSpecName: "config-data") pod "d1d5f264-2ed7-43a2-8179-53917835fc77" (UID: "d1d5f264-2ed7-43a2-8179-53917835fc77"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.138453 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.159108 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.159153 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5knw\" (UniqueName: \"kubernetes.io/projected/d1d5f264-2ed7-43a2-8179-53917835fc77-kube-api-access-p5knw\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.159163 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.159171 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.159179 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d5f264-2ed7-43a2-8179-53917835fc77-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.159187 4797 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1d5f264-2ed7-43a2-8179-53917835fc77-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.159245 4797 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") on node \"crc\" " Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.160052 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.196925 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:03 crc kubenswrapper[4797]: E0216 11:26:03.197315 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-httpd" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197332 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-httpd" Feb 16 11:26:03 crc kubenswrapper[4797]: E0216 11:26:03.197344 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-log" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197350 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-log" Feb 16 11:26:03 crc kubenswrapper[4797]: E0216 11:26:03.197373 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-log" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197379 4797 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-log" Feb 16 11:26:03 crc kubenswrapper[4797]: E0216 11:26:03.197401 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-httpd" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197406 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-httpd" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197607 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-httpd" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197625 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" containerName="glance-log" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197638 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-httpd" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.197655 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" containerName="glance-log" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.198590 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.202055 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.202276 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.244428 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.260457 4797 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.260624 4797 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4") on node "crc" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.261284 4797 reconciler_common.go:293] "Volume detached for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.363621 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.363736 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.363776 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.363819 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.364065 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.364426 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-logs\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.364516 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.364833 4797 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ggxx\" (UniqueName: \"kubernetes.io/projected/e45357a0-d18d-4114-8598-5fa948443f32-kube-api-access-5ggxx\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.467410 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.467497 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.467748 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.467776 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.468899 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.468963 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.469359 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-logs\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.469393 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.469443 4797 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ggxx\" (UniqueName: \"kubernetes.io/projected/e45357a0-d18d-4114-8598-5fa948443f32-kube-api-access-5ggxx\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.469828 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-logs\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.472610 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.474391 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.474429 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27e379195fe32f84d1c9f17b5c57278f71c5a261b9f037c02b6f4c2041aa5cbc/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.474692 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.476437 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.477414 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.491488 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ggxx\" (UniqueName: \"kubernetes.io/projected/e45357a0-d18d-4114-8598-5fa948443f32-kube-api-access-5ggxx\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.524565 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.580634 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.840283 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d1d5f264-2ed7-43a2-8179-53917835fc77","Type":"ContainerDied","Data":"91565b0836d159431b55f15a2f46affad4b5f1d45b1964db277cb5191b73cb17"} Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.840344 4797 scope.go:117] "RemoveContainer" containerID="2eb48dd7d26f1fee3bdfd0bd308f9cba173d4152dd4d9af00bd2f3d5c05303fd" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.840676 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.845444 4797 generic.go:334] "Generic (PLEG): container finished" podID="24fea779-c008-4fda-b2d0-e3201f7dfaed" containerID="9fad625b91c4f4a963210889c31deed8e2cf4bc1eb4474ee4bb40520b5de9912" exitCode=0 Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.845493 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z8bpc" event={"ID":"24fea779-c008-4fda-b2d0-e3201f7dfaed","Type":"ContainerDied","Data":"9fad625b91c4f4a963210889c31deed8e2cf4bc1eb4474ee4bb40520b5de9912"} Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.876431 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-664675cd85-bc4lp" event={"ID":"a81700a8-2372-4b4d-a769-5b6936ac7aba","Type":"ContainerStarted","Data":"5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db"} Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.876778 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-664675cd85-bc4lp" event={"ID":"a81700a8-2372-4b4d-a769-5b6936ac7aba","Type":"ContainerStarted","Data":"e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253"} Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.876802 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.876813 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-664675cd85-bc4lp" event={"ID":"a81700a8-2372-4b4d-a769-5b6936ac7aba","Type":"ContainerStarted","Data":"ac8bfcd3a5cee296b72d2edaa69ac0ed498ba1022de610726f72e26aa8dce8d9"} Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.880826 4797 generic.go:334] "Generic (PLEG): container finished" podID="e3129f86-1462-4e40-8695-e4ae737ebf5f" containerID="98a70302fbf2b9c6b1e49c622daa3a979d6e1cafc85f5c6e3f75d36132846f5d" exitCode=0 Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.881204 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4xnc7" event={"ID":"e3129f86-1462-4e40-8695-e4ae737ebf5f","Type":"ContainerDied","Data":"98a70302fbf2b9c6b1e49c622daa3a979d6e1cafc85f5c6e3f75d36132846f5d"} Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.906484 4797 scope.go:117] "RemoveContainer" 
containerID="ef3c1afcf981a70f9eed6a2bd4fdb20463a048b7c23a962437e282ead9fd0165" Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.922036 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:26:03 crc kubenswrapper[4797]: I0216 11:26:03.960651 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.058832 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0965af42-84ad-45d8-9516-4d835e8e2242" path="/var/lib/kubelet/pods/0965af42-84ad-45d8-9516-4d835e8e2242/volumes" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.059804 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1d5f264-2ed7-43a2-8179-53917835fc77" path="/var/lib/kubelet/pods/d1d5f264-2ed7-43a2-8179-53917835fc77/volumes" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.060407 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.063706 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.066649 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.067767 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.071506 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.077593 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-664675cd85-bc4lp" podStartSLOduration=3.077559452 podStartE2EDuration="3.077559452s" podCreationTimestamp="2026-02-16 11:26:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:03.949056239 +0000 UTC m=+1158.669241229" watchObservedRunningTime="2026-02-16 11:26:04.077559452 +0000 UTC m=+1158.797744432" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.184834 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.184902 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.184935 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " 
pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.184974 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-config-data\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.184995 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89vr\" (UniqueName: \"kubernetes.io/projected/7692da44-8fff-4c27-8069-4278620f1d55-kube-api-access-h89vr\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.185016 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.185062 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-logs\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.185096 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-scripts\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294595 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294657 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294705 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89vr\" (UniqueName: \"kubernetes.io/projected/7692da44-8fff-4c27-8069-4278620f1d55-kube-api-access-h89vr\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294727 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294744 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294794 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-logs\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294838 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-scripts\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.294898 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.295404 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.303400 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.304023 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-logs\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.304652 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-config-data\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.317717 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " 
pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.318215 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-scripts\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.338324 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89vr\" (UniqueName: \"kubernetes.io/projected/7692da44-8fff-4c27-8069-4278620f1d55-kube-api-access-h89vr\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.354674 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.354708 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8da30fb4c83d9deb7f001a58f922a696263527e837af7c4c51b5beb3f892969/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.366382 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.404653 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.412873 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.527531 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5kmr4" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.601609 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jh6n\" (UniqueName: \"kubernetes.io/projected/35f90c62-8793-4bcc-8b06-9b0b710776d7-kube-api-access-5jh6n\") pod \"35f90c62-8793-4bcc-8b06-9b0b710776d7\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.601757 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f90c62-8793-4bcc-8b06-9b0b710776d7-logs\") pod \"35f90c62-8793-4bcc-8b06-9b0b710776d7\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.601869 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-config-data\") pod \"35f90c62-8793-4bcc-8b06-9b0b710776d7\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.601951 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-combined-ca-bundle\") pod \"35f90c62-8793-4bcc-8b06-9b0b710776d7\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.602038 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-scripts\") pod \"35f90c62-8793-4bcc-8b06-9b0b710776d7\" (UID: \"35f90c62-8793-4bcc-8b06-9b0b710776d7\") " Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.606430 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35f90c62-8793-4bcc-8b06-9b0b710776d7-logs" (OuterVolumeSpecName: "logs") pod "35f90c62-8793-4bcc-8b06-9b0b710776d7" (UID: "35f90c62-8793-4bcc-8b06-9b0b710776d7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.607732 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f90c62-8793-4bcc-8b06-9b0b710776d7-kube-api-access-5jh6n" (OuterVolumeSpecName: "kube-api-access-5jh6n") pod "35f90c62-8793-4bcc-8b06-9b0b710776d7" (UID: "35f90c62-8793-4bcc-8b06-9b0b710776d7"). InnerVolumeSpecName "kube-api-access-5jh6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.614254 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-scripts" (OuterVolumeSpecName: "scripts") pod "35f90c62-8793-4bcc-8b06-9b0b710776d7" (UID: "35f90c62-8793-4bcc-8b06-9b0b710776d7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.639490 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35f90c62-8793-4bcc-8b06-9b0b710776d7" (UID: "35f90c62-8793-4bcc-8b06-9b0b710776d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.641007 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-config-data" (OuterVolumeSpecName: "config-data") pod "35f90c62-8793-4bcc-8b06-9b0b710776d7" (UID: "35f90c62-8793-4bcc-8b06-9b0b710776d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.704815 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jh6n\" (UniqueName: \"kubernetes.io/projected/35f90c62-8793-4bcc-8b06-9b0b710776d7-kube-api-access-5jh6n\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.704845 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35f90c62-8793-4bcc-8b06-9b0b710776d7-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.704858 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.704870 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.704881 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35f90c62-8793-4bcc-8b06-9b0b710776d7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.901620 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5kmr4"
Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.902125 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5kmr4" event={"ID":"35f90c62-8793-4bcc-8b06-9b0b710776d7","Type":"ContainerDied","Data":"251c487618195f7eea77ef6f9b8ea8c2350247ebbc532a78068b6d2fe87f11c3"}
Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.902165 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251c487618195f7eea77ef6f9b8ea8c2350247ebbc532a78068b6d2fe87f11c3"
Feb 16 11:26:04 crc kubenswrapper[4797]: I0216 11:26:04.904085 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45357a0-d18d-4114-8598-5fa948443f32","Type":"ContainerStarted","Data":"5b60fff64e28041ae5aa0bd5d3323dda29282390c638addd99baa6992dea0137"}
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.036611 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.046402 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5dfb479d6b-r2spn"]
Feb 16 11:26:05 crc kubenswrapper[4797]: E0216 11:26:05.046824 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f90c62-8793-4bcc-8b06-9b0b710776d7" containerName="placement-db-sync"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.046837 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f90c62-8793-4bcc-8b06-9b0b710776d7" containerName="placement-db-sync"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.047016 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f90c62-8793-4bcc-8b06-9b0b710776d7" containerName="placement-db-sync"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.048004 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.051941 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rhw9s"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.052068 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.060163 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.060415 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.060561 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.105559 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dfb479d6b-r2spn"]
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.112498 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7xg\" (UniqueName: \"kubernetes.io/projected/c101b1b6-0ac8-4bfb-84ad-2620693178a4-kube-api-access-nm7xg\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.112847 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c101b1b6-0ac8-4bfb-84ad-2620693178a4-logs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.113258 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-scripts\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.113408 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-combined-ca-bundle\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.113452 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-internal-tls-certs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.113674 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-config-data\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.114721 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-public-tls-certs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.216867 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-scripts\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.216937 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-combined-ca-bundle\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.216960 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-internal-tls-certs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.217008 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-config-data\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.217028 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-public-tls-certs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.217057 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7xg\" (UniqueName: \"kubernetes.io/projected/c101b1b6-0ac8-4bfb-84ad-2620693178a4-kube-api-access-nm7xg\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.217117 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c101b1b6-0ac8-4bfb-84ad-2620693178a4-logs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.217487 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c101b1b6-0ac8-4bfb-84ad-2620693178a4-logs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.221230 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-internal-tls-certs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.221433 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-scripts\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.223287 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-config-data\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.223436 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-public-tls-certs\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.223598 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-combined-ca-bundle\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.233771 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7xg\" (UniqueName: \"kubernetes.io/projected/c101b1b6-0ac8-4bfb-84ad-2620693178a4-kube-api-access-nm7xg\") pod \"placement-5dfb479d6b-r2spn\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.390040 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.490687 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-z8bpc"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.493335 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4xnc7"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.626495 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-fernet-keys\") pod \"e3129f86-1462-4e40-8695-e4ae737ebf5f\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.627818 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-combined-ca-bundle\") pod \"e3129f86-1462-4e40-8695-e4ae737ebf5f\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.627893 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-scripts\") pod \"e3129f86-1462-4e40-8695-e4ae737ebf5f\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.627912 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-combined-ca-bundle\") pod \"24fea779-c008-4fda-b2d0-e3201f7dfaed\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.627950 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-db-sync-config-data\") pod \"24fea779-c008-4fda-b2d0-e3201f7dfaed\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.627981 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqkpw\" (UniqueName: \"kubernetes.io/projected/e3129f86-1462-4e40-8695-e4ae737ebf5f-kube-api-access-hqkpw\") pod \"e3129f86-1462-4e40-8695-e4ae737ebf5f\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.628036 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-credential-keys\") pod \"e3129f86-1462-4e40-8695-e4ae737ebf5f\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.628070 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-config-data\") pod \"e3129f86-1462-4e40-8695-e4ae737ebf5f\" (UID: \"e3129f86-1462-4e40-8695-e4ae737ebf5f\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.628104 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7hcr\" (UniqueName: \"kubernetes.io/projected/24fea779-c008-4fda-b2d0-e3201f7dfaed-kube-api-access-f7hcr\") pod \"24fea779-c008-4fda-b2d0-e3201f7dfaed\" (UID: \"24fea779-c008-4fda-b2d0-e3201f7dfaed\") "
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.633618 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e3129f86-1462-4e40-8695-e4ae737ebf5f" (UID: "e3129f86-1462-4e40-8695-e4ae737ebf5f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.635292 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24fea779-c008-4fda-b2d0-e3201f7dfaed-kube-api-access-f7hcr" (OuterVolumeSpecName: "kube-api-access-f7hcr") pod "24fea779-c008-4fda-b2d0-e3201f7dfaed" (UID: "24fea779-c008-4fda-b2d0-e3201f7dfaed"). InnerVolumeSpecName "kube-api-access-f7hcr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.636849 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3129f86-1462-4e40-8695-e4ae737ebf5f-kube-api-access-hqkpw" (OuterVolumeSpecName: "kube-api-access-hqkpw") pod "e3129f86-1462-4e40-8695-e4ae737ebf5f" (UID: "e3129f86-1462-4e40-8695-e4ae737ebf5f"). InnerVolumeSpecName "kube-api-access-hqkpw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.644174 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "24fea779-c008-4fda-b2d0-e3201f7dfaed" (UID: "24fea779-c008-4fda-b2d0-e3201f7dfaed"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.644999 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e3129f86-1462-4e40-8695-e4ae737ebf5f" (UID: "e3129f86-1462-4e40-8695-e4ae737ebf5f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.652814 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-scripts" (OuterVolumeSpecName: "scripts") pod "e3129f86-1462-4e40-8695-e4ae737ebf5f" (UID: "e3129f86-1462-4e40-8695-e4ae737ebf5f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.662745 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3129f86-1462-4e40-8695-e4ae737ebf5f" (UID: "e3129f86-1462-4e40-8695-e4ae737ebf5f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.666704 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-config-data" (OuterVolumeSpecName: "config-data") pod "e3129f86-1462-4e40-8695-e4ae737ebf5f" (UID: "e3129f86-1462-4e40-8695-e4ae737ebf5f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.687829 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24fea779-c008-4fda-b2d0-e3201f7dfaed" (UID: "24fea779-c008-4fda-b2d0-e3201f7dfaed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730806 4797 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730843 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730853 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730863 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730872 4797 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/24fea779-c008-4fda-b2d0-e3201f7dfaed-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730880 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqkpw\" (UniqueName: \"kubernetes.io/projected/e3129f86-1462-4e40-8695-e4ae737ebf5f-kube-api-access-hqkpw\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730890 4797 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730900 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3129f86-1462-4e40-8695-e4ae737ebf5f-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.730909 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7hcr\" (UniqueName: \"kubernetes.io/projected/24fea779-c008-4fda-b2d0-e3201f7dfaed-kube-api-access-f7hcr\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.919391 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-z8bpc" event={"ID":"24fea779-c008-4fda-b2d0-e3201f7dfaed","Type":"ContainerDied","Data":"b1a2ca2734d2b3a901fc5d86d972c45991e1e0d9f568bf2040fd8936e1869340"}
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.919906 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a2ca2734d2b3a901fc5d86d972c45991e1e0d9f568bf2040fd8936e1869340"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.919651 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-z8bpc"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.924776 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7692da44-8fff-4c27-8069-4278620f1d55","Type":"ContainerStarted","Data":"0b44b44e4ec5a8f588432c89ba3976eba18143ce5194edf83079a6351b89e0a1"}
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.924817 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7692da44-8fff-4c27-8069-4278620f1d55","Type":"ContainerStarted","Data":"c3f768c943addb60d05706396f621639bc97cf8f865ea12f6dcce1716a67e8af"}
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.926816 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45357a0-d18d-4114-8598-5fa948443f32","Type":"ContainerStarted","Data":"51db9dc1b0d260bf1e92f8a916482c8455108baa9c2c5abd2ff62f98e5830b87"}
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.928849 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4xnc7" event={"ID":"e3129f86-1462-4e40-8695-e4ae737ebf5f","Type":"ContainerDied","Data":"0f52b435b0e6a0bc035ec9957bafb004a2bab390d16d027302c6c7e08f262d00"}
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.928902 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f52b435b0e6a0bc035ec9957bafb004a2bab390d16d027302c6c7e08f262d00"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.928978 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4xnc7"
Feb 16 11:26:05 crc kubenswrapper[4797]: I0216 11:26:05.976502 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5dfb479d6b-r2spn"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.055203 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-99c64f77c-dxwz8"]
Feb 16 11:26:06 crc kubenswrapper[4797]: E0216 11:26:06.055718 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24fea779-c008-4fda-b2d0-e3201f7dfaed" containerName="barbican-db-sync"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.055741 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="24fea779-c008-4fda-b2d0-e3201f7dfaed" containerName="barbican-db-sync"
Feb 16 11:26:06 crc kubenswrapper[4797]: E0216 11:26:06.055763 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3129f86-1462-4e40-8695-e4ae737ebf5f" containerName="keystone-bootstrap"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.055771 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3129f86-1462-4e40-8695-e4ae737ebf5f" containerName="keystone-bootstrap"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.056027 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3129f86-1462-4e40-8695-e4ae737ebf5f" containerName="keystone-bootstrap"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.056056 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="24fea779-c008-4fda-b2d0-e3201f7dfaed" containerName="barbican-db-sync"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.057832 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.064903 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.065811 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.066034 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.069987 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kpt68"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.070006 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.072408 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.094363 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-99c64f77c-dxwz8"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.156787 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-config-data\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.157253 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-combined-ca-bundle\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.157303 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b624x\" (UniqueName: \"kubernetes.io/projected/94fb19b8-1690-4768-97cd-e918e0f54862-kube-api-access-b624x\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.157333 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-fernet-keys\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.157378 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-public-tls-certs\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.157403 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-internal-tls-certs\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.157465 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-scripts\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.157497 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-credential-keys\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.174060 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7d5b65d687-8mk7v"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.176164 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.184104 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.184134 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xvcw6"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.184292 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.207637 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5b65d687-8mk7v"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.223094 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-8697c4c9db-m5gfj"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.225145 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.232829 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.236803 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8697c4c9db-m5gfj"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259625 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-config-data\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259706 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d426800-6d74-4ef8-a726-a0edc1e0aadf-logs\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259751 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-combined-ca-bundle\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259791 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data-custom\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259815 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b624x\" (UniqueName: \"kubernetes.io/projected/94fb19b8-1690-4768-97cd-e918e0f54862-kube-api-access-b624x\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259842 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-fernet-keys\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259874 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259893 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-internal-tls-certs\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259909 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-public-tls-certs\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259924 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-combined-ca-bundle\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259950 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sbx4\" (UniqueName: \"kubernetes.io/projected/3d426800-6d74-4ef8-a726-a0edc1e0aadf-kube-api-access-4sbx4\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.259989 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-scripts\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.260008 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-credential-keys\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.269227 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-fernet-keys\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.271792 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gg8pv"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.271854 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-credential-keys\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.272004 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerName="dnsmasq-dns" containerID="cri-o://ec0e0d40a258184eb2adec76a9dc0ce4f6800bc623248761ef6aa27c9d6c9635" gracePeriod=10
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.272017 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-internal-tls-certs\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.272523 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-config-data\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.275165 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-public-tls-certs\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.277910 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-scripts\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.288756 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb19b8-1690-4768-97cd-e918e0f54862-combined-ca-bundle\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.326175 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b624x\" (UniqueName: \"kubernetes.io/projected/94fb19b8-1690-4768-97cd-e918e0f54862-kube-api-access-b624x\") pod \"keystone-99c64f77c-dxwz8\" (UID: \"94fb19b8-1690-4768-97cd-e918e0f54862\") " pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.342893 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gl82w"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.345419 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361650 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-combined-ca-bundle\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361705 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sbx4\" (UniqueName: \"kubernetes.io/projected/3d426800-6d74-4ef8-a726-a0edc1e0aadf-kube-api-access-4sbx4\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361736 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r988g\" (UniqueName: \"kubernetes.io/projected/6549c454-fd65-4edf-806e-bee17fdd8e4f-kube-api-access-r988g\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361787 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361813 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data-custom\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361865 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d426800-6d74-4ef8-a726-a0edc1e0aadf-logs\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361897 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-combined-ca-bundle\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361935 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data-custom\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.361958 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6549c454-fd65-4edf-806e-bee17fdd8e4f-logs\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.362001 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.366733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d426800-6d74-4ef8-a726-a0edc1e0aadf-logs\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.370726 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.378261 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.383675 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gl82w"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.392155 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data-custom\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.402120 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sbx4\" (UniqueName: \"kubernetes.io/projected/3d426800-6d74-4ef8-a726-a0edc1e0aadf-kube-api-access-4sbx4\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.407205 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-combined-ca-bundle\") pod \"barbican-worker-7d5b65d687-8mk7v\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.420600 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b956c448-lx9qt"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.422131 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.427192 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465692 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-svc\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465751 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-combined-ca-bundle\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465785 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-config\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465834 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb1a8e-1c20-4c94-9874-86adb1a9314b-logs\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465859 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6549c454-fd65-4edf-806e-bee17fdd8e4f-logs\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465928 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r988g\" (UniqueName: \"kubernetes.io/projected/6549c454-fd65-4edf-806e-bee17fdd8e4f-kube-api-access-r988g\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465963 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mht4\" (UniqueName: \"kubernetes.io/projected/b3eaa18d-dc3e-4499-b37e-58ff7449745f-kube-api-access-2mht4\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.465985 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466013 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data-custom\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466045 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b5dn\" (UniqueName: \"kubernetes.io/projected/32fb1a8e-1c20-4c94-9874-86adb1a9314b-kube-api-access-8b5dn\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466085 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466108 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466134 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-combined-ca-bundle\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466164 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data-custom\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466187 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466219 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.466881 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6549c454-fd65-4edf-806e-bee17fdd8e4f-logs\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.468708 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b956c448-lx9qt"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.502735 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-combined-ca-bundle\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.502872 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r988g\" (UniqueName: \"kubernetes.io/projected/6549c454-fd65-4edf-806e-bee17fdd8e4f-kube-api-access-r988g\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.505387 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.508333 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data-custom\") pod \"barbican-keystone-listener-8697c4c9db-m5gfj\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.522163 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d5b65d687-8mk7v"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.555941 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568247 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb1a8e-1c20-4c94-9874-86adb1a9314b-logs\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568555 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mht4\" (UniqueName: \"kubernetes.io/projected/b3eaa18d-dc3e-4499-b37e-58ff7449745f-kube-api-access-2mht4\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568611 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568642 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data-custom\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568693 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b5dn\" (UniqueName: \"kubernetes.io/projected/32fb1a8e-1c20-4c94-9874-86adb1a9314b-kube-api-access-8b5dn\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568724 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568783 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-combined-ca-bundle\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568806 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568857 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.568966 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-svc\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.569024 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-config\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.569115 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb1a8e-1c20-4c94-9874-86adb1a9314b-logs\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.573324 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-svc\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.576277 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.576874 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-combined-ca-bundle\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.576905 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-config\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.577189 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.595100 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.599415 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data-custom\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.599428 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-cdc59674-z5klt"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.604484 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b5dn\" (UniqueName: \"kubernetes.io/projected/32fb1a8e-1c20-4c94-9874-86adb1a9314b-kube-api-access-8b5dn\") pod \"barbican-api-7b956c448-lx9qt\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.604513 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.606029 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mht4\" (UniqueName: \"kubernetes.io/projected/b3eaa18d-dc3e-4499-b37e-58ff7449745f-kube-api-access-2mht4\") pod \"dnsmasq-dns-85ff748b95-gl82w\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") " pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.608424 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-cdc59674-z5klt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.647637 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.660561 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-cdc59674-z5klt"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.668414 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.670622 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-logs\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.670667 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-combined-ca-bundle\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.670718 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c87qk\" (UniqueName: \"kubernetes.io/projected/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-kube-api-access-c87qk\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.670746 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-config-data-custom\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.670809 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-config-data\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.673475 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cd6bd5769-dzjd4"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.675598 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cd6bd5769-dzjd4"
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.726827 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cd6bd5769-dzjd4"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.763693 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f675df6c4-7jpbc"]
Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.766156 4797 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.772772 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c87qk\" (UniqueName: \"kubernetes.io/projected/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-kube-api-access-c87qk\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.772950 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-config-data-custom\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773045 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a099a104-659d-41b1-a775-201ce4979384-logs\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773071 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-config-data-custom\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773091 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-config-data\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773220 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-logs\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773278 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-combined-ca-bundle\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773313 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-config-data\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773328 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-combined-ca-bundle\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.773346 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckwxs\" (UniqueName: \"kubernetes.io/projected/a099a104-659d-41b1-a775-201ce4979384-kube-api-access-ckwxs\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.776676 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-logs\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.778674 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-combined-ca-bundle\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.783389 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-config-data-custom\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.791726 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-config-data\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.796216 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f675df6c4-7jpbc"] Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.805227 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c87qk\" (UniqueName: \"kubernetes.io/projected/0ec6277d-293d-47f4-8dc0-d407a4d1bfc8-kube-api-access-c87qk\") pod \"barbican-keystone-listener-cdc59674-z5klt\" (UID: \"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8\") " pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875315 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875369 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data-custom\") pod 
\"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875476 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a099a104-659d-41b1-a775-201ce4979384-logs\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875501 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-config-data-custom\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875555 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39972561-a4a4-45aa-939d-0c1d194d603a-logs\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875593 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phspb\" (UniqueName: \"kubernetes.io/projected/39972561-a4a4-45aa-939d-0c1d194d603a-kube-api-access-phspb\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875620 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-combined-ca-bundle\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875652 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-config-data\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875668 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-combined-ca-bundle\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.875710 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckwxs\" (UniqueName: \"kubernetes.io/projected/a099a104-659d-41b1-a775-201ce4979384-kube-api-access-ckwxs\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.876132 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a099a104-659d-41b1-a775-201ce4979384-logs\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.882603 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-combined-ca-bundle\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.882805 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-config-data-custom\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.893618 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a099a104-659d-41b1-a775-201ce4979384-config-data\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.897106 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckwxs\" (UniqueName: \"kubernetes.io/projected/a099a104-659d-41b1-a775-201ce4979384-kube-api-access-ckwxs\") pod \"barbican-worker-5cd6bd5769-dzjd4\" (UID: \"a099a104-659d-41b1-a775-201ce4979384\") " pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.943975 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45357a0-d18d-4114-8598-5fa948443f32","Type":"ContainerStarted","Data":"2eb882a209ba222ac4a28467a60e6142ecb4307f204e350317ebd67b229d7496"} Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.947061 4797 generic.go:334] "Generic (PLEG): container finished" podID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerID="ec0e0d40a258184eb2adec76a9dc0ce4f6800bc623248761ef6aa27c9d6c9635" exitCode=0 Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.947101 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" event={"ID":"fff75593-1e2b-47c3-8219-2105ebaca44d","Type":"ContainerDied","Data":"ec0e0d40a258184eb2adec76a9dc0ce4f6800bc623248761ef6aa27c9d6c9635"} Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.964481 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.964463127 podStartE2EDuration="3.964463127s" podCreationTimestamp="2026-02-16 11:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:06.960918011 +0000 UTC m=+1161.681102991" watchObservedRunningTime="2026-02-16 11:26:06.964463127 +0000 UTC m=+1161.684648117" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.975969 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-cdc59674-z5klt" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.977151 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-combined-ca-bundle\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.977201 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.977236 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data-custom\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.977368 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39972561-a4a4-45aa-939d-0c1d194d603a-logs\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.977403 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phspb\" (UniqueName: \"kubernetes.io/projected/39972561-a4a4-45aa-939d-0c1d194d603a-kube-api-access-phspb\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.978907 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39972561-a4a4-45aa-939d-0c1d194d603a-logs\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.983432 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.983894 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data-custom\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc kubenswrapper[4797]: I0216 11:26:06.987808 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-combined-ca-bundle\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:06 crc 
kubenswrapper[4797]: I0216 11:26:06.992984 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phspb\" (UniqueName: \"kubernetes.io/projected/39972561-a4a4-45aa-939d-0c1d194d603a-kube-api-access-phspb\") pod \"barbican-api-f675df6c4-7jpbc\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:07 crc kubenswrapper[4797]: I0216 11:26:07.002225 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cd6bd5769-dzjd4" Feb 16 11:26:07 crc kubenswrapper[4797]: I0216 11:26:07.096205 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.818052 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b956c448-lx9qt"] Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.858434 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7d9f6f7d6-62qcx"] Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.860708 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.863412 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.863415 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.874415 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d9f6f7d6-62qcx"] Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.917326 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-combined-ca-bundle\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.917389 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-config-data\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.917414 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00aa91c-5090-4635-b93f-531cc33523b9-logs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.917478 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh2rr\" (UniqueName: \"kubernetes.io/projected/a00aa91c-5090-4635-b93f-531cc33523b9-kube-api-access-lh2rr\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.917524 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-public-tls-certs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.917593 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-config-data-custom\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:08 crc kubenswrapper[4797]: I0216 11:26:08.917618 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-internal-tls-certs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.019811 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-config-data-custom\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.019958 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-internal-tls-certs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.020011 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-combined-ca-bundle\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.020101 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-config-data\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.020145 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00aa91c-5090-4635-b93f-531cc33523b9-logs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.020256 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh2rr\" (UniqueName: \"kubernetes.io/projected/a00aa91c-5090-4635-b93f-531cc33523b9-kube-api-access-lh2rr\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.020323 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-public-tls-certs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.022306 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a00aa91c-5090-4635-b93f-531cc33523b9-logs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.026631 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-internal-tls-certs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.026967 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-config-data\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.027661 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-config-data-custom\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.029022 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-combined-ca-bundle\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.040045 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a00aa91c-5090-4635-b93f-531cc33523b9-public-tls-certs\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.041518 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh2rr\" (UniqueName: \"kubernetes.io/projected/a00aa91c-5090-4635-b93f-531cc33523b9-kube-api-access-lh2rr\") pod \"barbican-api-7d9f6f7d6-62qcx\" (UID: \"a00aa91c-5090-4635-b93f-531cc33523b9\") " pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:09 crc kubenswrapper[4797]: I0216 11:26:09.183625 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:10 crc kubenswrapper[4797]: E0216 11:26:10.118812 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:26:10 crc kubenswrapper[4797]: E0216 11:26:10.119110 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:26:10 crc kubenswrapper[4797]: E0216 11:26:10.119245 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 11:26:10 crc kubenswrapper[4797]: E0216 11:26:10.120437 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:26:10 crc kubenswrapper[4797]: W0216 11:26:10.367003 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc101b1b6_0ac8_4bfb_84ad_2620693178a4.slice/crio-d0bc0c70726d980f3ad0e05ae9761c1659a852d626263da2ec3f01a891487015 WatchSource:0}: Error finding container d0bc0c70726d980f3ad0e05ae9761c1659a852d626263da2ec3f01a891487015: Status 404 returned error can't find the container with id d0bc0c70726d980f3ad0e05ae9761c1659a852d626263da2ec3f01a891487015 Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.588418 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.755701 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-svc\") pod \"fff75593-1e2b-47c3-8219-2105ebaca44d\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.756002 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-sb\") pod \"fff75593-1e2b-47c3-8219-2105ebaca44d\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.756048 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-nb\") pod \"fff75593-1e2b-47c3-8219-2105ebaca44d\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.756110 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r6km\" (UniqueName: \"kubernetes.io/projected/fff75593-1e2b-47c3-8219-2105ebaca44d-kube-api-access-2r6km\") pod \"fff75593-1e2b-47c3-8219-2105ebaca44d\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.756160 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-swift-storage-0\") pod \"fff75593-1e2b-47c3-8219-2105ebaca44d\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.756203 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config\") pod \"fff75593-1e2b-47c3-8219-2105ebaca44d\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.782996 4797 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fff75593-1e2b-47c3-8219-2105ebaca44d-kube-api-access-2r6km" (OuterVolumeSpecName: "kube-api-access-2r6km") pod "fff75593-1e2b-47c3-8219-2105ebaca44d" (UID: "fff75593-1e2b-47c3-8219-2105ebaca44d"). InnerVolumeSpecName "kube-api-access-2r6km". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.866092 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r6km\" (UniqueName: \"kubernetes.io/projected/fff75593-1e2b-47c3-8219-2105ebaca44d-kube-api-access-2r6km\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.951355 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fff75593-1e2b-47c3-8219-2105ebaca44d" (UID: "fff75593-1e2b-47c3-8219-2105ebaca44d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.968025 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:10 crc kubenswrapper[4797]: I0216 11:26:10.974041 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-99c64f77c-dxwz8"] Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.014106 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fff75593-1e2b-47c3-8219-2105ebaca44d" (UID: "fff75593-1e2b-47c3-8219-2105ebaca44d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.032237 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-99c64f77c-dxwz8" event={"ID":"94fb19b8-1690-4768-97cd-e918e0f54862","Type":"ContainerStarted","Data":"703c4d5b56f08da5c1172007bc1ab6af7ca7d5d1a2a25a10a7beb466095780af"} Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.047178 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" event={"ID":"fff75593-1e2b-47c3-8219-2105ebaca44d","Type":"ContainerDied","Data":"62d812a303c5dae2edde8b085d7ee9cf8ae1b1ff7eb74ecd5755dee333158e14"} Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.047225 4797 scope.go:117] "RemoveContainer" containerID="ec0e0d40a258184eb2adec76a9dc0ce4f6800bc623248761ef6aa27c9d6c9635" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.047335 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.053254 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfb479d6b-r2spn" event={"ID":"c101b1b6-0ac8-4bfb-84ad-2620693178a4","Type":"ContainerStarted","Data":"d0bc0c70726d980f3ad0e05ae9761c1659a852d626263da2ec3f01a891487015"} Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.069861 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config" (OuterVolumeSpecName: "config") pod "fff75593-1e2b-47c3-8219-2105ebaca44d" (UID: "fff75593-1e2b-47c3-8219-2105ebaca44d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.070278 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config\") pod \"fff75593-1e2b-47c3-8219-2105ebaca44d\" (UID: \"fff75593-1e2b-47c3-8219-2105ebaca44d\") " Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.070882 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:11 crc kubenswrapper[4797]: W0216 11:26:11.070944 4797 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fff75593-1e2b-47c3-8219-2105ebaca44d/volumes/kubernetes.io~configmap/config Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.070953 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config" (OuterVolumeSpecName: "config") pod "fff75593-1e2b-47c3-8219-2105ebaca44d" (UID: "fff75593-1e2b-47c3-8219-2105ebaca44d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.147885 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fff75593-1e2b-47c3-8219-2105ebaca44d" (UID: "fff75593-1e2b-47c3-8219-2105ebaca44d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.167204 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fff75593-1e2b-47c3-8219-2105ebaca44d" (UID: "fff75593-1e2b-47c3-8219-2105ebaca44d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.173592 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.173623 4797 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.173637 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fff75593-1e2b-47c3-8219-2105ebaca44d-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.204919 4797 scope.go:117] "RemoveContainer" containerID="7f53a00fe26c22b5db44fa88ad192b7c63ecb6b675d1fb118f6e07121842436a" Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.290471 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-8697c4c9db-m5gfj"] Feb 16 11:26:11 crc kubenswrapper[4797]: W0216 11:26:11.292474 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d426800_6d74_4ef8_a726_a0edc1e0aadf.slice/crio-c6a0741cd387fcb7a224e71462c5857776eb085974db7a6832f7821fafef11d5 WatchSource:0}: Error finding container c6a0741cd387fcb7a224e71462c5857776eb085974db7a6832f7821fafef11d5: Status 404 returned error can't find the container with id c6a0741cd387fcb7a224e71462c5857776eb085974db7a6832f7821fafef11d5 Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.311219 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7d5b65d687-8mk7v"] Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.496742 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cd6bd5769-dzjd4"] Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.514639 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gg8pv"] Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.528837 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-gg8pv"] Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.703602 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:26:11 crc kubenswrapper[4797]: I0216 11:26:11.703655 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.073935 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" path="/var/lib/kubelet/pods/fff75593-1e2b-47c3-8219-2105ebaca44d/volumes" Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.112909 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-7b956c448-lx9qt"] Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.124726 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-cdc59674-z5klt"] Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.130347 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd6bd5769-dzjd4" event={"ID":"a099a104-659d-41b1-a775-201ce4979384","Type":"ContainerStarted","Data":"55e361b399007ff92aad9123b0915a61ac7a6a46860c1c4b7efbcc93eb282a40"} Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.138016 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gl82w"] Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.138872 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b65d687-8mk7v" event={"ID":"3d426800-6d74-4ef8-a726-a0edc1e0aadf","Type":"ContainerStarted","Data":"c6a0741cd387fcb7a224e71462c5857776eb085974db7a6832f7821fafef11d5"} Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.145618 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f675df6c4-7jpbc"] Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.178936 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerStarted","Data":"8ec571c2b817b180635e617a198eb9781578ffc63b6f1112199cd05770192a1f"} Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.201469 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" event={"ID":"6549c454-fd65-4edf-806e-bee17fdd8e4f","Type":"ContainerStarted","Data":"7638de4e78c16fd93469b9c754694f8cc6d497efff8e0404eb2136a692105512"} Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.225334 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfb479d6b-r2spn" event={"ID":"c101b1b6-0ac8-4bfb-84ad-2620693178a4","Type":"ContainerStarted","Data":"382dd506b090917fad7cf2c7b97e73b4d1221387ae29331570b06f1eb8599925"} Feb 16 11:26:12 crc kubenswrapper[4797]: I0216 11:26:12.280296 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7d9f6f7d6-62qcx"] Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.243885 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-99c64f77c-dxwz8" event={"ID":"94fb19b8-1690-4768-97cd-e918e0f54862","Type":"ContainerStarted","Data":"c35adb7ff24b5e3d4f12122ed7e44d479e93b5b422c5674ee1f34f1b79876b0e"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.244186 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-99c64f77c-dxwz8" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.250603 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfb479d6b-r2spn" event={"ID":"c101b1b6-0ac8-4bfb-84ad-2620693178a4","Type":"ContainerStarted","Data":"18bfb05f115df428b9d1bc9e732a206b583a5382ef1629e3551e6399d32440e8"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.250668 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dfb479d6b-r2spn" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.250682 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5dfb479d6b-r2spn" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.252830 4797 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/barbican-api-7b956c448-lx9qt" event={"ID":"32fb1a8e-1c20-4c94-9874-86adb1a9314b","Type":"ContainerStarted","Data":"b6de0a588582480c5219dd91cf6acc90a0282df54f79960f4f60ce66d82cea28"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.252875 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b956c448-lx9qt" event={"ID":"32fb1a8e-1c20-4c94-9874-86adb1a9314b","Type":"ContainerStarted","Data":"5468119f92c7bba7326b05a5bbfa05240d27634a6c7e72b1d5aee0d13282df79"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.252894 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b956c448-lx9qt" event={"ID":"32fb1a8e-1c20-4c94-9874-86adb1a9314b","Type":"ContainerStarted","Data":"09b5eb705851a0ea2e0c22d1ef7476dd886ca6b1e818ad15df33aac1a627fd7c"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.252909 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b956c448-lx9qt" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.252921 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b956c448-lx9qt" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.252887 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b956c448-lx9qt" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api-log" containerID="cri-o://5468119f92c7bba7326b05a5bbfa05240d27634a6c7e72b1d5aee0d13282df79" gracePeriod=30 Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.252920 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b956c448-lx9qt" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api" containerID="cri-o://b6de0a588582480c5219dd91cf6acc90a0282df54f79960f4f60ce66d82cea28" gracePeriod=30 Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.264797 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f675df6c4-7jpbc" event={"ID":"39972561-a4a4-45aa-939d-0c1d194d603a","Type":"ContainerStarted","Data":"e7e6be99223fbfa79b49bfe727cb1ac622860db5114a1ca8083ad0840241172e"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.264835 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f675df6c4-7jpbc" event={"ID":"39972561-a4a4-45aa-939d-0c1d194d603a","Type":"ContainerStarted","Data":"6647628e009feed879b41e1f8aa79b296a228de3b27dae1695adf9d93b99397a"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.264847 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f675df6c4-7jpbc" event={"ID":"39972561-a4a4-45aa-939d-0c1d194d603a","Type":"ContainerStarted","Data":"6943a22f6eaa2c62aed8e583b0df306fa341db6659f6d8a9ed061f990190992a"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.267686 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.267732 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.270443 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7692da44-8fff-4c27-8069-4278620f1d55","Type":"ContainerStarted","Data":"068392753fe07fe9efc2d75cfdef54388401fa6ff9d164599a854dbaeb33fb60"} Feb 16 11:26:13 crc 
kubenswrapper[4797]: I0216 11:26:13.271492 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-99c64f77c-dxwz8" podStartSLOduration=7.271474107 podStartE2EDuration="7.271474107s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:13.261852403 +0000 UTC m=+1167.982037403" watchObservedRunningTime="2026-02-16 11:26:13.271474107 +0000 UTC m=+1167.991659087" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.278755 4797 generic.go:334] "Generic (PLEG): container finished" podID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerID="5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46" exitCode=0 Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.278793 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" event={"ID":"b3eaa18d-dc3e-4499-b37e-58ff7449745f","Type":"ContainerDied","Data":"5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.278868 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" event={"ID":"b3eaa18d-dc3e-4499-b37e-58ff7449745f","Type":"ContainerStarted","Data":"40c4eeeb11b9f8092b9bb0790192211432cd1084a5bb71f758478d287f8f4770"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.281188 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-cdc59674-z5klt" event={"ID":"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8","Type":"ContainerStarted","Data":"0a5fa176d5ae54ac2e9e2c6fe61e7c673ec9f3f5a016dfeb49d24fabd3740318"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.287932 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d9f6f7d6-62qcx" event={"ID":"a00aa91c-5090-4635-b93f-531cc33523b9","Type":"ContainerStarted","Data":"051c5dbb24917d2386eb5749738ce2f21072c844a810a5aca4fe35fa5cb28910"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.287973 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d9f6f7d6-62qcx" event={"ID":"a00aa91c-5090-4635-b93f-531cc33523b9","Type":"ContainerStarted","Data":"8b582ed8648458ddc6be3944645f707a77838fb9ba59527c4a2ae4c193e9b67d"} Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.295565 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5dfb479d6b-r2spn" podStartSLOduration=8.295549795 podStartE2EDuration="8.295549795s" podCreationTimestamp="2026-02-16 11:26:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:13.285899591 +0000 UTC m=+1168.006084581" watchObservedRunningTime="2026-02-16 11:26:13.295549795 +0000 UTC m=+1168.015734775" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.318352 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b956c448-lx9qt" podStartSLOduration=7.318336937 podStartE2EDuration="7.318336937s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:13.307731518 +0000 UTC m=+1168.027916498" watchObservedRunningTime="2026-02-16 11:26:13.318336937 +0000 UTC m=+1168.038521907" Feb 16 11:26:13 crc 
kubenswrapper[4797]: I0216 11:26:13.356960 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=10.356932593 podStartE2EDuration="10.356932593s" podCreationTimestamp="2026-02-16 11:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:13.338849039 +0000 UTC m=+1168.059034019" watchObservedRunningTime="2026-02-16 11:26:13.356932593 +0000 UTC m=+1168.077117573" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.388833 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f675df6c4-7jpbc" podStartSLOduration=7.388815545 podStartE2EDuration="7.388815545s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:13.364599603 +0000 UTC m=+1168.084784583" watchObservedRunningTime="2026-02-16 11:26:13.388815545 +0000 UTC m=+1168.109000525" Feb 16 11:26:13 crc kubenswrapper[4797]: E0216 11:26:13.406571 4797 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32fb1a8e_1c20_4c94_9874_86adb1a9314b.slice/crio-5468119f92c7bba7326b05a5bbfa05240d27634a6c7e72b1d5aee0d13282df79.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32fb1a8e_1c20_4c94_9874_86adb1a9314b.slice/crio-conmon-5468119f92c7bba7326b05a5bbfa05240d27634a6c7e72b1d5aee0d13282df79.scope\": RecentStats: unable to find data in memory cache]" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.581050 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.581121 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.645052 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.645455 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:13 crc kubenswrapper[4797]: I0216 11:26:13.949433 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-gg8pv" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: i/o timeout" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.304663 4797 generic.go:334] "Generic (PLEG): container finished" podID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerID="b6de0a588582480c5219dd91cf6acc90a0282df54f79960f4f60ce66d82cea28" exitCode=0 Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.304969 4797 generic.go:334] "Generic (PLEG): container finished" podID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerID="5468119f92c7bba7326b05a5bbfa05240d27634a6c7e72b1d5aee0d13282df79" exitCode=143 Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.305350 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b956c448-lx9qt" 
event={"ID":"32fb1a8e-1c20-4c94-9874-86adb1a9314b","Type":"ContainerDied","Data":"b6de0a588582480c5219dd91cf6acc90a0282df54f79960f4f60ce66d82cea28"} Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.305410 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b956c448-lx9qt" event={"ID":"32fb1a8e-1c20-4c94-9874-86adb1a9314b","Type":"ContainerDied","Data":"5468119f92c7bba7326b05a5bbfa05240d27634a6c7e72b1d5aee0d13282df79"} Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.307128 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.307185 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.413421 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.413485 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.460852 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.470719 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.858323 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b956c448-lx9qt" Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.983504 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-combined-ca-bundle\") pod \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.983969 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb1a8e-1c20-4c94-9874-86adb1a9314b-logs\") pod \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.984079 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b5dn\" (UniqueName: \"kubernetes.io/projected/32fb1a8e-1c20-4c94-9874-86adb1a9314b-kube-api-access-8b5dn\") pod \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.984167 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data\") pod \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.984244 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data-custom\") pod \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\" (UID: \"32fb1a8e-1c20-4c94-9874-86adb1a9314b\") " Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 
Feb 16 11:26:14 crc kubenswrapper[4797]: I0216 11:26:14.995730 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32fb1a8e-1c20-4c94-9874-86adb1a9314b-logs" (OuterVolumeSpecName: "logs") pod "32fb1a8e-1c20-4c94-9874-86adb1a9314b" (UID: "32fb1a8e-1c20-4c94-9874-86adb1a9314b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.077699 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fb1a8e-1c20-4c94-9874-86adb1a9314b-kube-api-access-8b5dn" (OuterVolumeSpecName: "kube-api-access-8b5dn") pod "32fb1a8e-1c20-4c94-9874-86adb1a9314b" (UID: "32fb1a8e-1c20-4c94-9874-86adb1a9314b"). InnerVolumeSpecName "kube-api-access-8b5dn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.083774 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "32fb1a8e-1c20-4c94-9874-86adb1a9314b" (UID: "32fb1a8e-1c20-4c94-9874-86adb1a9314b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.090435 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32fb1a8e-1c20-4c94-9874-86adb1a9314b-logs\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.090477 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b5dn\" (UniqueName: \"kubernetes.io/projected/32fb1a8e-1c20-4c94-9874-86adb1a9314b-kube-api-access-8b5dn\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.090487 4797 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.328020 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b956c448-lx9qt" event={"ID":"32fb1a8e-1c20-4c94-9874-86adb1a9314b","Type":"ContainerDied","Data":"09b5eb705851a0ea2e0c22d1ef7476dd886ca6b1e818ad15df33aac1a627fd7c"}
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.328051 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b956c448-lx9qt"
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.328085 4797 scope.go:117] "RemoveContainer" containerID="b6de0a588582480c5219dd91cf6acc90a0282df54f79960f4f60ce66d82cea28"
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.331804 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-cdc59674-z5klt" event={"ID":"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8","Type":"ContainerStarted","Data":"b98d3a1bee0dcdc343e0bd85b7b9b2fd5b42f110b754c15d9d8aad43372f04ef"}
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.333758 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" event={"ID":"6549c454-fd65-4edf-806e-bee17fdd8e4f","Type":"ContainerStarted","Data":"432ad19d9c61c9997e08d1abe339f473341881a839669c46d0ef2182a0ace6b6"}
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.334933 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.334960 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.383919 4797 scope.go:117] "RemoveContainer" containerID="5468119f92c7bba7326b05a5bbfa05240d27634a6c7e72b1d5aee0d13282df79"
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.405399 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32fb1a8e-1c20-4c94-9874-86adb1a9314b" (UID: "32fb1a8e-1c20-4c94-9874-86adb1a9314b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.497400 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.498882 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data" (OuterVolumeSpecName: "config-data") pod "32fb1a8e-1c20-4c94-9874-86adb1a9314b" (UID: "32fb1a8e-1c20-4c94-9874-86adb1a9314b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.598802 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32fb1a8e-1c20-4c94-9874-86adb1a9314b-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.666378 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b956c448-lx9qt"]
Feb 16 11:26:15 crc kubenswrapper[4797]: I0216 11:26:15.675713 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b956c448-lx9qt"]
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.013131 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" path="/var/lib/kubelet/pods/32fb1a8e-1c20-4c94-9874-86adb1a9314b/volumes"
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.349766 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b65d687-8mk7v" event={"ID":"3d426800-6d74-4ef8-a726-a0edc1e0aadf","Type":"ContainerStarted","Data":"c7d76c7eb00eb8b7063dc4d1b2dc1cecf1ff880cdc6cc42b8453a3ec83c30155"}
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.349807 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b65d687-8mk7v" event={"ID":"3d426800-6d74-4ef8-a726-a0edc1e0aadf","Type":"ContainerStarted","Data":"478dff8bf8ee7182f35f6c9e286f2ff374c8319064a40e8fc2708d004129d1e6"}
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.355087 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" event={"ID":"b3eaa18d-dc3e-4499-b37e-58ff7449745f","Type":"ContainerStarted","Data":"9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd"}
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.355800 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.358561 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2dfc7" event={"ID":"062948d0-fd09-4e11-904d-a346a430ee4f","Type":"ContainerStarted","Data":"ead106d265fee7368ebd34864028cdba49a23285426e71c60e8aad6dc35e7e9f"}
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.361230 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-cdc59674-z5klt" event={"ID":"0ec6277d-293d-47f4-8dc0-d407a4d1bfc8","Type":"ContainerStarted","Data":"ca0b8bbb96efc8c28ea3d23b766a92fab489f5319e5e54a77f92634c6d254e6f"}
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.363820 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7d9f6f7d6-62qcx" event={"ID":"a00aa91c-5090-4635-b93f-531cc33523b9","Type":"ContainerStarted","Data":"1af04c5d6978839fcef9ef19b187af7482729344a4e8cc4af666ce368a2ec98d"}
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.364256 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d9f6f7d6-62qcx"
Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.364276 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7d9f6f7d6-62qcx"
event={"ID":"6549c454-fd65-4edf-806e-bee17fdd8e4f","Type":"ContainerStarted","Data":"4a31deee3984134bececb730b72e520bc3e28d37fbc3418159159bea308193c1"} Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.379490 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7d5b65d687-8mk7v" podStartSLOduration=7.05649927 podStartE2EDuration="10.379470657s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="2026-02-16 11:26:11.296361859 +0000 UTC m=+1166.016546839" lastFinishedPulling="2026-02-16 11:26:14.619333246 +0000 UTC m=+1169.339518226" observedRunningTime="2026-02-16 11:26:16.376616049 +0000 UTC m=+1171.096801059" watchObservedRunningTime="2026-02-16 11:26:16.379470657 +0000 UTC m=+1171.099655647" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.389221 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd6bd5769-dzjd4" event={"ID":"a099a104-659d-41b1-a775-201ce4979384","Type":"ContainerStarted","Data":"adefeefe1cdb3e43aafb196fae1cd4187ad8f99cc380ed0a6f19ef7827f48f3d"} Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.389287 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cd6bd5769-dzjd4" event={"ID":"a099a104-659d-41b1-a775-201ce4979384","Type":"ContainerStarted","Data":"2af81c8f7a4deb9fd805a62d5c3f57196a4dd818a6a7fc86e649ec9bdaacdaec"} Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.395535 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.395563 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.407911 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7d9f6f7d6-62qcx" podStartSLOduration=8.407888834 podStartE2EDuration="8.407888834s" podCreationTimestamp="2026-02-16 11:26:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:16.399724201 +0000 UTC m=+1171.119909241" watchObservedRunningTime="2026-02-16 11:26:16.407888834 +0000 UTC m=+1171.128073814" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.455890 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" podStartSLOduration=10.455871466 podStartE2EDuration="10.455871466s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:16.433212496 +0000 UTC m=+1171.153397466" watchObservedRunningTime="2026-02-16 11:26:16.455871466 +0000 UTC m=+1171.176056446" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.486507 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" podStartSLOduration=7.124892619 podStartE2EDuration="10.486483402s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="2026-02-16 11:26:11.291087265 +0000 UTC m=+1166.011272245" lastFinishedPulling="2026-02-16 11:26:14.652678048 +0000 UTC m=+1169.372863028" observedRunningTime="2026-02-16 11:26:16.478722101 +0000 UTC m=+1171.198907081" watchObservedRunningTime="2026-02-16 11:26:16.486483402 +0000 UTC m=+1171.206668402" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.494638 
4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2dfc7" podStartSLOduration=4.324872206 podStartE2EDuration="47.494615175s" podCreationTimestamp="2026-02-16 11:25:29 +0000 UTC" firstStartedPulling="2026-02-16 11:25:31.484598794 +0000 UTC m=+1126.204783764" lastFinishedPulling="2026-02-16 11:26:14.654341753 +0000 UTC m=+1169.374526733" observedRunningTime="2026-02-16 11:26:16.459247779 +0000 UTC m=+1171.179432759" watchObservedRunningTime="2026-02-16 11:26:16.494615175 +0000 UTC m=+1171.214800145" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.501366 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-cdc59674-z5klt" podStartSLOduration=8.052069178 podStartE2EDuration="10.50135035s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="2026-02-16 11:26:12.150456349 +0000 UTC m=+1166.870641329" lastFinishedPulling="2026-02-16 11:26:14.599737531 +0000 UTC m=+1169.319922501" observedRunningTime="2026-02-16 11:26:16.494211624 +0000 UTC m=+1171.214396604" watchObservedRunningTime="2026-02-16 11:26:16.50135035 +0000 UTC m=+1171.221535330" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.530771 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8697c4c9db-m5gfj"] Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.536360 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cd6bd5769-dzjd4" podStartSLOduration=7.435987115 podStartE2EDuration="10.536338026s" podCreationTimestamp="2026-02-16 11:26:06 +0000 UTC" firstStartedPulling="2026-02-16 11:26:11.519391346 +0000 UTC m=+1166.239576326" lastFinishedPulling="2026-02-16 11:26:14.619742257 +0000 UTC m=+1169.339927237" observedRunningTime="2026-02-16 11:26:16.51598924 +0000 UTC m=+1171.236174220" watchObservedRunningTime="2026-02-16 11:26:16.536338026 +0000 UTC m=+1171.256523006" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.556851 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7d5b65d687-8mk7v"] Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.743993 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:16 crc kubenswrapper[4797]: I0216 11:26:16.748962 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:17 crc kubenswrapper[4797]: I0216 11:26:17.426095 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:26:17 crc kubenswrapper[4797]: I0216 11:26:17.652208 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 11:26:18 crc kubenswrapper[4797]: I0216 11:26:18.435978 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener-log" containerID="cri-o://432ad19d9c61c9997e08d1abe339f473341881a839669c46d0ef2182a0ace6b6" gracePeriod=30 Feb 16 11:26:18 crc kubenswrapper[4797]: I0216 11:26:18.436798 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7d5b65d687-8mk7v" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerName="barbican-worker-log" 
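The startup-latency entries above encode a simple relationship that the barbican-worker-7d5b65d687-8mk7v line makes visible: podStartSLOduration is the end-to-end duration (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why entries with zeroed pull timestamps report identical SLO and E2E values. A small Go check against the numbers from that entry; this is an observation derived from these log values, and the layout string is an assumption matching the printed format:

```go
package main

import (
	"fmt"
	"time"
)

// Layout assumed to match the timestamps printed in the journal above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	// Values copied from the barbican-worker-7d5b65d687-8mk7v entry.
	created, _ := time.Parse(layout, "2026-02-16 11:26:06 +0000 UTC")
	firstPull, _ := time.Parse(layout, "2026-02-16 11:26:11.296361859 +0000 UTC")
	lastPull, _ := time.Parse(layout, "2026-02-16 11:26:14.619333246 +0000 UTC")
	watched, _ := time.Parse(layout, "2026-02-16 11:26:16.379470657 +0000 UTC")

	e2e := watched.Sub(created)      // 10.379470657s = podStartE2EDuration
	pull := lastPull.Sub(firstPull)  // 3.322971387s spent pulling the image
	slo := e2e - pull                // 7.05649927s = podStartSLOduration

	fmt.Printf("e2e=%v pull=%v slo=%v\n", e2e, pull, slo)
}
```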
containerID="cri-o://478dff8bf8ee7182f35f6c9e286f2ff374c8319064a40e8fc2708d004129d1e6" gracePeriod=30 Feb 16 11:26:18 crc kubenswrapper[4797]: I0216 11:26:18.437796 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener" containerID="cri-o://4a31deee3984134bececb730b72e520bc3e28d37fbc3418159159bea308193c1" gracePeriod=30 Feb 16 11:26:18 crc kubenswrapper[4797]: I0216 11:26:18.438427 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-7d5b65d687-8mk7v" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerName="barbican-worker" containerID="cri-o://c7d76c7eb00eb8b7063dc4d1b2dc1cecf1ff880cdc6cc42b8453a3ec83c30155" gracePeriod=30 Feb 16 11:26:18 crc kubenswrapper[4797]: I0216 11:26:18.846146 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:18 crc kubenswrapper[4797]: I0216 11:26:18.948507 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.257114 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.449345 4797 generic.go:334] "Generic (PLEG): container finished" podID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerID="c7d76c7eb00eb8b7063dc4d1b2dc1cecf1ff880cdc6cc42b8453a3ec83c30155" exitCode=0 Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.449387 4797 generic.go:334] "Generic (PLEG): container finished" podID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerID="478dff8bf8ee7182f35f6c9e286f2ff374c8319064a40e8fc2708d004129d1e6" exitCode=143 Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.449435 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b65d687-8mk7v" event={"ID":"3d426800-6d74-4ef8-a726-a0edc1e0aadf","Type":"ContainerDied","Data":"c7d76c7eb00eb8b7063dc4d1b2dc1cecf1ff880cdc6cc42b8453a3ec83c30155"} Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.449468 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b65d687-8mk7v" event={"ID":"3d426800-6d74-4ef8-a726-a0edc1e0aadf","Type":"ContainerDied","Data":"478dff8bf8ee7182f35f6c9e286f2ff374c8319064a40e8fc2708d004129d1e6"} Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.451986 4797 generic.go:334] "Generic (PLEG): container finished" podID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerID="4a31deee3984134bececb730b72e520bc3e28d37fbc3418159159bea308193c1" exitCode=0 Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.452015 4797 generic.go:334] "Generic (PLEG): container finished" podID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerID="432ad19d9c61c9997e08d1abe339f473341881a839669c46d0ef2182a0ace6b6" exitCode=143 Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.452988 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" event={"ID":"6549c454-fd65-4edf-806e-bee17fdd8e4f","Type":"ContainerDied","Data":"4a31deee3984134bececb730b72e520bc3e28d37fbc3418159159bea308193c1"} Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.453021 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" event={"ID":"6549c454-fd65-4edf-806e-bee17fdd8e4f","Type":"ContainerDied","Data":"432ad19d9c61c9997e08d1abe339f473341881a839669c46d0ef2182a0ace6b6"} Feb 16 11:26:19 crc kubenswrapper[4797]: I0216 11:26:19.723353 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 11:26:21 crc kubenswrapper[4797]: I0216 11:26:21.501111 4797 generic.go:334] "Generic (PLEG): container finished" podID="062948d0-fd09-4e11-904d-a346a430ee4f" containerID="ead106d265fee7368ebd34864028cdba49a23285426e71c60e8aad6dc35e7e9f" exitCode=0 Feb 16 11:26:21 crc kubenswrapper[4797]: I0216 11:26:21.501437 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2dfc7" event={"ID":"062948d0-fd09-4e11-904d-a346a430ee4f","Type":"ContainerDied","Data":"ead106d265fee7368ebd34864028cdba49a23285426e71c60e8aad6dc35e7e9f"} Feb 16 11:26:21 crc kubenswrapper[4797]: I0216 11:26:21.649784 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" Feb 16 11:26:21 crc kubenswrapper[4797]: I0216 11:26:21.708068 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9kfc4"] Feb 16 11:26:21 crc kubenswrapper[4797]: I0216 11:26:21.708343 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-9kfc4" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="dnsmasq-dns" containerID="cri-o://e93f7d344bac97a6c4a7957c3a3fc983901c80194483b2f7a840f663f2d50ccf" gracePeriod=10 Feb 16 11:26:21 crc kubenswrapper[4797]: E0216 11:26:21.984605 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:26:22 crc kubenswrapper[4797]: I0216 11:26:22.325145 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9kfc4" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 16 11:26:24 crc kubenswrapper[4797]: I0216 11:26:24.191661 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7d9f6f7d6-62qcx" Feb 16 11:26:24 crc kubenswrapper[4797]: I0216 11:26:24.262087 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-f675df6c4-7jpbc"] Feb 16 11:26:24 crc kubenswrapper[4797]: I0216 11:26:24.262373 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-f675df6c4-7jpbc" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api-log" containerID="cri-o://6647628e009feed879b41e1f8aa79b296a228de3b27dae1695adf9d93b99397a" gracePeriod=30 Feb 16 11:26:24 crc kubenswrapper[4797]: I0216 11:26:24.262717 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-f675df6c4-7jpbc" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api" containerID="cri-o://e7e6be99223fbfa79b49bfe727cb1ac622860db5114a1ca8083ad0840241172e" gracePeriod=30 Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:26.548308 4797 generic.go:334] "Generic 
(PLEG): container finished" podID="5984b22e-1ba0-4050-a595-28423d93bc33" containerID="e93f7d344bac97a6c4a7957c3a3fc983901c80194483b2f7a840f663f2d50ccf" exitCode=0 Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:26.548393 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9kfc4" event={"ID":"5984b22e-1ba0-4050-a595-28423d93bc33","Type":"ContainerDied","Data":"e93f7d344bac97a6c4a7957c3a3fc983901c80194483b2f7a840f663f2d50ccf"} Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:27.322924 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9kfc4" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:27.417532 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-f675df6c4-7jpbc" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": read tcp 10.217.0.2:35188->10.217.0.180:9311: read: connection reset by peer" Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:27.417615 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-f675df6c4-7jpbc" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.180:9311/healthcheck\": read tcp 10.217.0.2:35204->10.217.0.180:9311: read: connection reset by peer" Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:27.562166 4797 generic.go:334] "Generic (PLEG): container finished" podID="39972561-a4a4-45aa-939d-0c1d194d603a" containerID="6647628e009feed879b41e1f8aa79b296a228de3b27dae1695adf9d93b99397a" exitCode=143 Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:27.562223 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f675df6c4-7jpbc" event={"ID":"39972561-a4a4-45aa-939d-0c1d194d603a","Type":"ContainerDied","Data":"6647628e009feed879b41e1f8aa79b296a228de3b27dae1695adf9d93b99397a"} Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:28.577308 4797 generic.go:334] "Generic (PLEG): container finished" podID="39972561-a4a4-45aa-939d-0c1d194d603a" containerID="e7e6be99223fbfa79b49bfe727cb1ac622860db5114a1ca8083ad0840241172e" exitCode=0 Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:28.577409 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f675df6c4-7jpbc" event={"ID":"39972561-a4a4-45aa-939d-0c1d194d603a","Type":"ContainerDied","Data":"e7e6be99223fbfa79b49bfe727cb1ac622860db5114a1ca8083ad0840241172e"} Feb 16 11:26:28 crc kubenswrapper[4797]: I0216 11:26:28.967671 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.025801 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.035931 4797 util.go:48] "No ready sandbox for pod can be found. 
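The two probe failure outputs in this stretch are the classic pair for an endpoint being torn down: "i/o timeout" means nothing answered within the probe window, while "connect: connection refused" means the address actively rejected the connection because nothing is listening anymore (here, dnsmasq after its container died). A minimal Go sketch of a TCP-style readiness check that distinguishes the two; it mimics the shape of such a probe rather than kubelet's prober, the address is the dnsmasq pod IP:port from the log, and the one-second timeout is an arbitrary stand-in for the probe's configured timeout:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// checkTCP reports ready when a TCP dial completes within the timeout.
func checkTCP(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	err := checkTCP("10.217.0.129:5353", 1*time.Second) // pod IP:port from the log
	var nerr net.Error
	switch {
	case err == nil:
		fmt.Println("ready")
	case errors.As(err, &nerr) && nerr.Timeout():
		fmt.Println("not ready: i/o timeout (no answer within the probe window)")
	default:
		fmt.Println("not ready:", err) // e.g. "connect: connection refused"
	}
}
```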
Need to start a new one" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.114768 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-db-sync-config-data\") pod \"062948d0-fd09-4e11-904d-a346a430ee4f\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.114818 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-scripts\") pod \"062948d0-fd09-4e11-904d-a346a430ee4f\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.114879 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-config-data\") pod \"062948d0-fd09-4e11-904d-a346a430ee4f\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.114930 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-combined-ca-bundle\") pod \"6549c454-fd65-4edf-806e-bee17fdd8e4f\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.114953 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data\") pod \"6549c454-fd65-4edf-806e-bee17fdd8e4f\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.114985 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-combined-ca-bundle\") pod \"062948d0-fd09-4e11-904d-a346a430ee4f\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.115015 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r988g\" (UniqueName: \"kubernetes.io/projected/6549c454-fd65-4edf-806e-bee17fdd8e4f-kube-api-access-r988g\") pod \"6549c454-fd65-4edf-806e-bee17fdd8e4f\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.115087 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnxg\" (UniqueName: \"kubernetes.io/projected/062948d0-fd09-4e11-904d-a346a430ee4f-kube-api-access-nwnxg\") pod \"062948d0-fd09-4e11-904d-a346a430ee4f\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.115118 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6549c454-fd65-4edf-806e-bee17fdd8e4f-logs\") pod \"6549c454-fd65-4edf-806e-bee17fdd8e4f\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.115160 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data-custom\") pod 
\"6549c454-fd65-4edf-806e-bee17fdd8e4f\" (UID: \"6549c454-fd65-4edf-806e-bee17fdd8e4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.115188 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/062948d0-fd09-4e11-904d-a346a430ee4f-etc-machine-id\") pod \"062948d0-fd09-4e11-904d-a346a430ee4f\" (UID: \"062948d0-fd09-4e11-904d-a346a430ee4f\") " Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.115705 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/062948d0-fd09-4e11-904d-a346a430ee4f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "062948d0-fd09-4e11-904d-a346a430ee4f" (UID: "062948d0-fd09-4e11-904d-a346a430ee4f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.116213 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6549c454-fd65-4edf-806e-bee17fdd8e4f-logs" (OuterVolumeSpecName: "logs") pod "6549c454-fd65-4edf-806e-bee17fdd8e4f" (UID: "6549c454-fd65-4edf-806e-bee17fdd8e4f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.120327 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062948d0-fd09-4e11-904d-a346a430ee4f-kube-api-access-nwnxg" (OuterVolumeSpecName: "kube-api-access-nwnxg") pod "062948d0-fd09-4e11-904d-a346a430ee4f" (UID: "062948d0-fd09-4e11-904d-a346a430ee4f"). InnerVolumeSpecName "kube-api-access-nwnxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.120662 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "062948d0-fd09-4e11-904d-a346a430ee4f" (UID: "062948d0-fd09-4e11-904d-a346a430ee4f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.120789 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6549c454-fd65-4edf-806e-bee17fdd8e4f" (UID: "6549c454-fd65-4edf-806e-bee17fdd8e4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.144452 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6549c454-fd65-4edf-806e-bee17fdd8e4f-kube-api-access-r988g" (OuterVolumeSpecName: "kube-api-access-r988g") pod "6549c454-fd65-4edf-806e-bee17fdd8e4f" (UID: "6549c454-fd65-4edf-806e-bee17fdd8e4f"). InnerVolumeSpecName "kube-api-access-r988g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.148920 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-scripts" (OuterVolumeSpecName: "scripts") pod "062948d0-fd09-4e11-904d-a346a430ee4f" (UID: "062948d0-fd09-4e11-904d-a346a430ee4f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.151909 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6549c454-fd65-4edf-806e-bee17fdd8e4f" (UID: "6549c454-fd65-4edf-806e-bee17fdd8e4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.153306 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "062948d0-fd09-4e11-904d-a346a430ee4f" (UID: "062948d0-fd09-4e11-904d-a346a430ee4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.229567 4797 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.244823 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.245061 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.245303 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.245384 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r988g\" (UniqueName: \"kubernetes.io/projected/6549c454-fd65-4edf-806e-bee17fdd8e4f-kube-api-access-r988g\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.245464 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnxg\" (UniqueName: \"kubernetes.io/projected/062948d0-fd09-4e11-904d-a346a430ee4f-kube-api-access-nwnxg\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.245549 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6549c454-fd65-4edf-806e-bee17fdd8e4f-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.245727 4797 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.245812 4797 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/062948d0-fd09-4e11-904d-a346a430ee4f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.230761 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data" (OuterVolumeSpecName: "config-data") pod "6549c454-fd65-4edf-806e-bee17fdd8e4f" (UID: "6549c454-fd65-4edf-806e-bee17fdd8e4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.244749 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-config-data" (OuterVolumeSpecName: "config-data") pod "062948d0-fd09-4e11-904d-a346a430ee4f" (UID: "062948d0-fd09-4e11-904d-a346a430ee4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.249063 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-664675cd85-bc4lp"] Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.259058 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-664675cd85-bc4lp" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-api" containerID="cri-o://e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253" gracePeriod=30 Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.259665 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-664675cd85-bc4lp" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-httpd" containerID="cri-o://5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db" gracePeriod=30 Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.298251 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6fd696486f-x6hfl"] Feb 16 11:26:29 crc kubenswrapper[4797]: E0216 11:26:29.298766 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerName="dnsmasq-dns" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.298781 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerName="dnsmasq-dns" Feb 16 11:26:29 crc kubenswrapper[4797]: E0216 11:26:29.298791 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerName="init" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.298798 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerName="init" Feb 16 11:26:29 crc kubenswrapper[4797]: E0216 11:26:29.298814 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.298822 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api" Feb 16 11:26:29 crc kubenswrapper[4797]: E0216 11:26:29.298838 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.298846 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener" Feb 16 11:26:29 crc kubenswrapper[4797]: E0216 11:26:29.298861 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener-log" Feb 16 11:26:29 
crc kubenswrapper[4797]: I0216 11:26:29.298869 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener-log" Feb 16 11:26:29 crc kubenswrapper[4797]: E0216 11:26:29.298879 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062948d0-fd09-4e11-904d-a346a430ee4f" containerName="cinder-db-sync" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.298886 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="062948d0-fd09-4e11-904d-a346a430ee4f" containerName="cinder-db-sync" Feb 16 11:26:29 crc kubenswrapper[4797]: E0216 11:26:29.298910 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api-log" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.298918 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api-log" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.306927 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api-log" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.306993 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="fff75593-1e2b-47c3-8219-2105ebaca44d" containerName="dnsmasq-dns" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.307011 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="062948d0-fd09-4e11-904d-a346a430ee4f" containerName="cinder-db-sync" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.307023 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener-log" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.307038 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fb1a8e-1c20-4c94-9874-86adb1a9314b" containerName="barbican-api" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.307055 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" containerName="barbican-keystone-listener" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.308407 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-664675cd85-bc4lp" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.309094 4797 util.go:30] "No sandbox for pod can be found. 
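The RemoveStaleState / "Deleted CPUSet assignment" burst above is triggered by admitting the new neutron pod: the CPU and memory managers walk their per-container resource state and drop entries belonging to pods (dnsmasq, barbican, cinder-db-sync) that no longer exist. A generic Go sketch of that cleanup pattern, assuming a simple in-memory map rather than kubelet's actual checkpointed store, with placeholder assignment values:

```go
package main

import "fmt"

// assignments maps podUID -> containerName -> a resource assignment
// (placeholder strings here; kubelet checkpoints richer state).
type assignments map[string]map[string]string

// removeStaleState drops every entry whose podUID is no longer active,
// mirroring the shape of the cpu_manager/memory_manager lines above.
func removeStaleState(state assignments, activePods map[string]bool) {
	for podUID, containers := range state {
		if activePods[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
		}
		delete(state, podUID)
	}
}

func main() {
	state := assignments{
		"fff75593-1e2b-47c3-8219-2105ebaca44d": {"dnsmasq-dns": "cpuset:0-1", "init": "cpuset:0"}, // deleted pod
		"247490ab-e07e-4491-854a-1adda964c68a": {"neutron-api": "cpuset:2-3"},                     // still active
	}
	removeStaleState(state, map[string]bool{"247490ab-e07e-4491-854a-1adda964c68a": true})
}
```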
Need to start a new one" pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.318504 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fd696486f-x6hfl"] Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.354611 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/062948d0-fd09-4e11-904d-a346a430ee4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.354641 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6549c454-fd65-4edf-806e-bee17fdd8e4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.456315 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-ovndb-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.456596 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-internal-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.456626 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-public-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.456647 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-config\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.456722 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f65rg\" (UniqueName: \"kubernetes.io/projected/247490ab-e07e-4491-854a-1adda964c68a-kube-api-access-f65rg\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.456737 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-httpd-config\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.456770 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-combined-ca-bundle\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 
11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.561205 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f65rg\" (UniqueName: \"kubernetes.io/projected/247490ab-e07e-4491-854a-1adda964c68a-kube-api-access-f65rg\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.561317 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-httpd-config\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.561478 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-combined-ca-bundle\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.561645 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-ovndb-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.561897 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-internal-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.561995 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-public-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.562031 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-config\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.565297 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-httpd-config\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.566252 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-ovndb-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.568172 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-public-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.569179 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-internal-tls-certs\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.575868 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-config\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.587164 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f65rg\" (UniqueName: \"kubernetes.io/projected/247490ab-e07e-4491-854a-1adda964c68a-kube-api-access-f65rg\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.588560 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/247490ab-e07e-4491-854a-1adda964c68a-combined-ca-bundle\") pod \"neutron-6fd696486f-x6hfl\" (UID: \"247490ab-e07e-4491-854a-1adda964c68a\") " pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.614053 4797 generic.go:334] "Generic (PLEG): container finished" podID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerID="5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db" exitCode=0 Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.614234 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-664675cd85-bc4lp" event={"ID":"a81700a8-2372-4b4d-a769-5b6936ac7aba","Type":"ContainerDied","Data":"5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db"} Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.617152 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" event={"ID":"6549c454-fd65-4edf-806e-bee17fdd8e4f","Type":"ContainerDied","Data":"7638de4e78c16fd93469b9c754694f8cc6d497efff8e0404eb2136a692105512"} Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.617223 4797 scope.go:117] "RemoveContainer" containerID="4a31deee3984134bececb730b72e520bc3e28d37fbc3418159159bea308193c1" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.617415 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-8697c4c9db-m5gfj" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.626592 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2dfc7" event={"ID":"062948d0-fd09-4e11-904d-a346a430ee4f","Type":"ContainerDied","Data":"646c22217dbbc8d47779de87ff8c2f61699173367eb80cc977ac316647c6cf26"} Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.626630 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="646c22217dbbc8d47779de87ff8c2f61699173367eb80cc977ac316647c6cf26" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.626696 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2dfc7" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.639929 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.724616 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-8697c4c9db-m5gfj"] Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.736559 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-8697c4c9db-m5gfj"] Feb 16 11:26:29 crc kubenswrapper[4797]: I0216 11:26:29.997624 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6549c454-fd65-4edf-806e-bee17fdd8e4f" path="/var/lib/kubelet/pods/6549c454-fd65-4edf-806e-bee17fdd8e4f/volumes" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.125900 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.126159 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h7pqs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a0649e0a-7249-45bd-ad8f-6c7e61456322): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.127899 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.149918 4797 scope.go:117] "RemoveContainer" containerID="432ad19d9c61c9997e08d1abe339f473341881a839669c46d0ef2182a0ace6b6" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.163263 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.178981 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.197573 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7d5b65d687-8mk7v" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.275878 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-combined-ca-bundle\") pod \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.275968 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx2sw\" (UniqueName: \"kubernetes.io/projected/5984b22e-1ba0-4050-a595-28423d93bc33-kube-api-access-qx2sw\") pod \"5984b22e-1ba0-4050-a595-28423d93bc33\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.275994 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data-custom\") pod \"39972561-a4a4-45aa-939d-0c1d194d603a\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276043 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data\") pod \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276073 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-config\") pod \"5984b22e-1ba0-4050-a595-28423d93bc33\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276102 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39972561-a4a4-45aa-939d-0c1d194d603a-logs\") pod \"39972561-a4a4-45aa-939d-0c1d194d603a\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276179 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sbx4\" (UniqueName: \"kubernetes.io/projected/3d426800-6d74-4ef8-a726-a0edc1e0aadf-kube-api-access-4sbx4\") pod \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276208 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data\") pod \"39972561-a4a4-45aa-939d-0c1d194d603a\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276269 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-sb\") pod \"5984b22e-1ba0-4050-a595-28423d93bc33\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276326 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-nb\") pod \"5984b22e-1ba0-4050-a595-28423d93bc33\" (UID: 
\"5984b22e-1ba0-4050-a595-28423d93bc33\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276361 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data-custom\") pod \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276403 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-dns-svc\") pod \"5984b22e-1ba0-4050-a595-28423d93bc33\" (UID: \"5984b22e-1ba0-4050-a595-28423d93bc33\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276447 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d426800-6d74-4ef8-a726-a0edc1e0aadf-logs\") pod \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\" (UID: \"3d426800-6d74-4ef8-a726-a0edc1e0aadf\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276476 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-combined-ca-bundle\") pod \"39972561-a4a4-45aa-939d-0c1d194d603a\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.276524 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phspb\" (UniqueName: \"kubernetes.io/projected/39972561-a4a4-45aa-939d-0c1d194d603a-kube-api-access-phspb\") pod \"39972561-a4a4-45aa-939d-0c1d194d603a\" (UID: \"39972561-a4a4-45aa-939d-0c1d194d603a\") " Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.285933 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d426800-6d74-4ef8-a726-a0edc1e0aadf-logs" (OuterVolumeSpecName: "logs") pod "3d426800-6d74-4ef8-a726-a0edc1e0aadf" (UID: "3d426800-6d74-4ef8-a726-a0edc1e0aadf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.292631 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39972561-a4a4-45aa-939d-0c1d194d603a-logs" (OuterVolumeSpecName: "logs") pod "39972561-a4a4-45aa-939d-0c1d194d603a" (UID: "39972561-a4a4-45aa-939d-0c1d194d603a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.292828 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39972561-a4a4-45aa-939d-0c1d194d603a-kube-api-access-phspb" (OuterVolumeSpecName: "kube-api-access-phspb") pod "39972561-a4a4-45aa-939d-0c1d194d603a" (UID: "39972561-a4a4-45aa-939d-0c1d194d603a"). InnerVolumeSpecName "kube-api-access-phspb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.292899 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d426800-6d74-4ef8-a726-a0edc1e0aadf-kube-api-access-4sbx4" (OuterVolumeSpecName: "kube-api-access-4sbx4") pod "3d426800-6d74-4ef8-a726-a0edc1e0aadf" (UID: "3d426800-6d74-4ef8-a726-a0edc1e0aadf"). InnerVolumeSpecName "kube-api-access-4sbx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.317155 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5984b22e-1ba0-4050-a595-28423d93bc33-kube-api-access-qx2sw" (OuterVolumeSpecName: "kube-api-access-qx2sw") pod "5984b22e-1ba0-4050-a595-28423d93bc33" (UID: "5984b22e-1ba0-4050-a595-28423d93bc33"). InnerVolumeSpecName "kube-api-access-qx2sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.321986 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3d426800-6d74-4ef8-a726-a0edc1e0aadf" (UID: "3d426800-6d74-4ef8-a726-a0edc1e0aadf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.379285 4797 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.379319 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d426800-6d74-4ef8-a726-a0edc1e0aadf-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.379330 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phspb\" (UniqueName: \"kubernetes.io/projected/39972561-a4a4-45aa-939d-0c1d194d603a-kube-api-access-phspb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.379340 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx2sw\" (UniqueName: \"kubernetes.io/projected/5984b22e-1ba0-4050-a595-28423d93bc33-kube-api-access-qx2sw\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.379349 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39972561-a4a4-45aa-939d-0c1d194d603a-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.379357 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sbx4\" (UniqueName: \"kubernetes.io/projected/3d426800-6d74-4ef8-a726-a0edc1e0aadf-kube-api-access-4sbx4\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.407828 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "39972561-a4a4-45aa-939d-0c1d194d603a" (UID: "39972561-a4a4-45aa-939d-0c1d194d603a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.407907 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.408409 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerName="barbican-worker-log" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408424 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerName="barbican-worker-log" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.408434 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="dnsmasq-dns" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408442 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="dnsmasq-dns" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.408453 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api-log" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408462 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api-log" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.408474 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerName="barbican-worker" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408482 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerName="barbican-worker" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.408511 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="init" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408518 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="init" Feb 16 11:26:30 crc kubenswrapper[4797]: E0216 11:26:30.408538 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408545 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408813 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api-log" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408834 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" containerName="barbican-api" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408849 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" containerName="barbican-worker-log" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408862 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" containerName="dnsmasq-dns" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.408877 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" 
containerName="barbican-worker" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.414149 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.430358 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.430519 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-992vq" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.430607 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.431359 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.441327 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.460659 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-khspv"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.462425 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.484345 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cb0a42d-c78d-40ae-b936-7ba7b7749437-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.484694 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.484789 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v52g6\" (UniqueName: \"kubernetes.io/projected/5cb0a42d-c78d-40ae-b936-7ba7b7749437-kube-api-access-v52g6\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.484918 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.518999 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data" (OuterVolumeSpecName: "config-data") pod "3d426800-6d74-4ef8-a726-a0edc1e0aadf" (UID: "3d426800-6d74-4ef8-a726-a0edc1e0aadf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.519243 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-scripts\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.519304 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.519681 4797 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.519699 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.568977 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-khspv"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.586393 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5984b22e-1ba0-4050-a595-28423d93bc33" (UID: "5984b22e-1ba0-4050-a595-28423d93bc33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.592300 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39972561-a4a4-45aa-939d-0c1d194d603a" (UID: "39972561-a4a4-45aa-939d-0c1d194d603a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.606204 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d426800-6d74-4ef8-a726-a0edc1e0aadf" (UID: "3d426800-6d74-4ef8-a726-a0edc1e0aadf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.613878 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data" (OuterVolumeSpecName: "config-data") pod "39972561-a4a4-45aa-939d-0c1d194d603a" (UID: "39972561-a4a4-45aa-939d-0c1d194d603a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.614553 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5984b22e-1ba0-4050-a595-28423d93bc33" (UID: "5984b22e-1ba0-4050-a595-28423d93bc33"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.623828 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5984b22e-1ba0-4050-a595-28423d93bc33" (UID: "5984b22e-1ba0-4050-a595-28423d93bc33"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.628089 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-config" (OuterVolumeSpecName: "config") pod "5984b22e-1ba0-4050-a595-28423d93bc33" (UID: "5984b22e-1ba0-4050-a595-28423d93bc33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.629753 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.629892 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.629984 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-config\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630084 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-scripts\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630120 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630190 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630223 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630333 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630357 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cb0a42d-c78d-40ae-b936-7ba7b7749437-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630415 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpwsf\" (UniqueName: \"kubernetes.io/projected/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-kube-api-access-fpwsf\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630457 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v52g6\" (UniqueName: \"kubernetes.io/projected/5cb0a42d-c78d-40ae-b936-7ba7b7749437-kube-api-access-v52g6\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630549 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630645 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d426800-6d74-4ef8-a726-a0edc1e0aadf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630661 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630671 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630679 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc 
kubenswrapper[4797]: I0216 11:26:30.630688 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630731 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5984b22e-1ba0-4050-a595-28423d93bc33-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630750 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39972561-a4a4-45aa-939d-0c1d194d603a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.630760 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cb0a42d-c78d-40ae-b936-7ba7b7749437-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.636078 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.636190 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.636431 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-scripts\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.646946 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f675df6c4-7jpbc" event={"ID":"39972561-a4a4-45aa-939d-0c1d194d603a","Type":"ContainerDied","Data":"6943a22f6eaa2c62aed8e583b0df306fa341db6659f6d8a9ed061f990190992a"} Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.647192 4797 scope.go:117] "RemoveContainer" containerID="e7e6be99223fbfa79b49bfe727cb1ac622860db5114a1ca8083ad0840241172e" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.647498 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f675df6c4-7jpbc" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.650561 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.651328 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v52g6\" (UniqueName: \"kubernetes.io/projected/5cb0a42d-c78d-40ae-b936-7ba7b7749437-kube-api-access-v52g6\") pod \"cinder-scheduler-0\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.658513 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7d5b65d687-8mk7v" event={"ID":"3d426800-6d74-4ef8-a726-a0edc1e0aadf","Type":"ContainerDied","Data":"c6a0741cd387fcb7a224e71462c5857776eb085974db7a6832f7821fafef11d5"} Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.658738 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7d5b65d687-8mk7v" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.672242 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.681037 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-central-agent" containerID="cri-o://34fab4cc55adc1a4ff05f3d8123f52bbaf777fcbcf8b214b844d1782f191e045" gracePeriod=30 Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.681329 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9kfc4" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.682060 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-notification-agent" containerID="cri-o://3a4fec4acf58ad5024698898910bc1c83246e0892c3afd601b13010a18c5474e" gracePeriod=30 Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.682177 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="sg-core" containerID="cri-o://8ec571c2b817b180635e617a198eb9781578ffc63b6f1112199cd05770192a1f" gracePeriod=30 Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.688652 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9kfc4" event={"ID":"5984b22e-1ba0-4050-a595-28423d93bc33","Type":"ContainerDied","Data":"fb4df4c3d33bc02ae4f586ca6aa3eb588bc18120d5857c804bad793274cf30a4"} Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.688774 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.693301 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.694367 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.708262 4797 scope.go:117] "RemoveContainer" containerID="6647628e009feed879b41e1f8aa79b296a228de3b27dae1695adf9d93b99397a" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.731860 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-config\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.731932 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.731949 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.732012 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpwsf\" (UniqueName: \"kubernetes.io/projected/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-kube-api-access-fpwsf\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.732068 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.732096 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.732985 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.733631 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-config\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: 
\"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.734143 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.735270 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.736712 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.759694 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpwsf\" (UniqueName: \"kubernetes.io/projected/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-kube-api-access-fpwsf\") pod \"dnsmasq-dns-5c9776ccc5-khspv\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.833892 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.833950 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.834009 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-logs\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.834078 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.834134 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtct\" (UniqueName: \"kubernetes.io/projected/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-kube-api-access-rmtct\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.834168 
4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.834341 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-scripts\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.854707 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.872665 4797 scope.go:117] "RemoveContainer" containerID="c7d76c7eb00eb8b7063dc4d1b2dc1cecf1ff880cdc6cc42b8453a3ec83c30155" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.892847 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.904138 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-f675df6c4-7jpbc"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.921799 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-f675df6c4-7jpbc"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.932955 4797 scope.go:117] "RemoveContainer" containerID="478dff8bf8ee7182f35f6c9e286f2ff374c8319064a40e8fc2708d004129d1e6" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.936431 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.937875 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.939866 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-logs\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.939957 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.940045 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtct\" (UniqueName: \"kubernetes.io/projected/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-kube-api-access-rmtct\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 
11:26:30.940049 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-7d5b65d687-8mk7v"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.940091 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.940226 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-scripts\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.940663 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.941283 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-logs\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.942177 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.944536 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-scripts\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.944841 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data-custom\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.946501 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.961145 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtct\" (UniqueName: \"kubernetes.io/projected/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-kube-api-access-rmtct\") pod \"cinder-api-0\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " pod="openstack/cinder-api-0" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.961949 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-7d5b65d687-8mk7v"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.974065 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-698758b865-9kfc4"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.974901 4797 scope.go:117] "RemoveContainer" containerID="e93f7d344bac97a6c4a7957c3a3fc983901c80194483b2f7a840f663f2d50ccf" Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.983106 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9kfc4"] Feb 16 11:26:30 crc kubenswrapper[4797]: I0216 11:26:30.991561 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6fd696486f-x6hfl"] Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.010234 4797 scope.go:117] "RemoveContainer" containerID="be6eaf0900da0384397bec37db1b3e17b142d9e55adfd705cbe70c5ee793ffde" Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.220072 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.439271 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.603064 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-khspv"] Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.758053 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5cb0a42d-c78d-40ae-b936-7ba7b7749437","Type":"ContainerStarted","Data":"93b7be5474507e2953a5342c820215176751bfb1729a929e850134baf336bcc5"} Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.761080 4797 generic.go:334] "Generic (PLEG): container finished" podID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerID="8ec571c2b817b180635e617a198eb9781578ffc63b6f1112199cd05770192a1f" exitCode=2 Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.761112 4797 generic.go:334] "Generic (PLEG): container finished" podID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerID="3a4fec4acf58ad5024698898910bc1c83246e0892c3afd601b13010a18c5474e" exitCode=0 Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.761122 4797 generic.go:334] "Generic (PLEG): container finished" podID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerID="34fab4cc55adc1a4ff05f3d8123f52bbaf777fcbcf8b214b844d1782f191e045" exitCode=0 Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.761158 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerDied","Data":"8ec571c2b817b180635e617a198eb9781578ffc63b6f1112199cd05770192a1f"} Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.761190 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerDied","Data":"3a4fec4acf58ad5024698898910bc1c83246e0892c3afd601b13010a18c5474e"} Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.761200 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerDied","Data":"34fab4cc55adc1a4ff05f3d8123f52bbaf777fcbcf8b214b844d1782f191e045"} Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.765283 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fd696486f-x6hfl" event={"ID":"247490ab-e07e-4491-854a-1adda964c68a","Type":"ContainerStarted","Data":"7eaa42d819e3a1df781e4421d3ef5404d00c87fb62135d0e79159ef1e716cd39"} Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.765335 
4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fd696486f-x6hfl" event={"ID":"247490ab-e07e-4491-854a-1adda964c68a","Type":"ContainerStarted","Data":"d7c115d62a47524252b5ac874c4e0b4d5665a76b93cb09c4b8811af7ee91f34c"} Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.787557 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-664675cd85-bc4lp" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.169:9696/\": dial tcp 10.217.0.169:9696: connect: connection refused" Feb 16 11:26:31 crc kubenswrapper[4797]: I0216 11:26:31.827633 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.021973 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39972561-a4a4-45aa-939d-0c1d194d603a" path="/var/lib/kubelet/pods/39972561-a4a4-45aa-939d-0c1d194d603a/volumes" Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.023327 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d426800-6d74-4ef8-a726-a0edc1e0aadf" path="/var/lib/kubelet/pods/3d426800-6d74-4ef8-a726-a0edc1e0aadf/volumes" Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.023945 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5984b22e-1ba0-4050-a595-28423d93bc33" path="/var/lib/kubelet/pods/5984b22e-1ba0-4050-a595-28423d93bc33/volumes" Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.147460 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.192412 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.281317 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-sg-core-conf-yaml\") pod \"a0649e0a-7249-45bd-ad8f-6c7e61456322\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.281366 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-run-httpd\") pod \"a0649e0a-7249-45bd-ad8f-6c7e61456322\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.281387 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-scripts\") pod \"a0649e0a-7249-45bd-ad8f-6c7e61456322\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.281452 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-config-data\") pod \"a0649e0a-7249-45bd-ad8f-6c7e61456322\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.281505 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-log-httpd\") pod \"a0649e0a-7249-45bd-ad8f-6c7e61456322\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") " 
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.281552 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7pqs\" (UniqueName: \"kubernetes.io/projected/a0649e0a-7249-45bd-ad8f-6c7e61456322-kube-api-access-h7pqs\") pod \"a0649e0a-7249-45bd-ad8f-6c7e61456322\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") "
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.281619 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-combined-ca-bundle\") pod \"a0649e0a-7249-45bd-ad8f-6c7e61456322\" (UID: \"a0649e0a-7249-45bd-ad8f-6c7e61456322\") "
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.283131 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a0649e0a-7249-45bd-ad8f-6c7e61456322" (UID: "a0649e0a-7249-45bd-ad8f-6c7e61456322"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.284934 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a0649e0a-7249-45bd-ad8f-6c7e61456322" (UID: "a0649e0a-7249-45bd-ad8f-6c7e61456322"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.285732 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-scripts" (OuterVolumeSpecName: "scripts") pod "a0649e0a-7249-45bd-ad8f-6c7e61456322" (UID: "a0649e0a-7249-45bd-ad8f-6c7e61456322"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.290805 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0649e0a-7249-45bd-ad8f-6c7e61456322-kube-api-access-h7pqs" (OuterVolumeSpecName: "kube-api-access-h7pqs") pod "a0649e0a-7249-45bd-ad8f-6c7e61456322" (UID: "a0649e0a-7249-45bd-ad8f-6c7e61456322"). InnerVolumeSpecName "kube-api-access-h7pqs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.315326 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a0649e0a-7249-45bd-ad8f-6c7e61456322" (UID: "a0649e0a-7249-45bd-ad8f-6c7e61456322"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.364273 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-config-data" (OuterVolumeSpecName: "config-data") pod "a0649e0a-7249-45bd-ad8f-6c7e61456322" (UID: "a0649e0a-7249-45bd-ad8f-6c7e61456322"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.379114 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0649e0a-7249-45bd-ad8f-6c7e61456322" (UID: "a0649e0a-7249-45bd-ad8f-6c7e61456322"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.389615 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.389649 4797 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.389659 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7pqs\" (UniqueName: \"kubernetes.io/projected/a0649e0a-7249-45bd-ad8f-6c7e61456322-kube-api-access-h7pqs\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.389668 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.389678 4797 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.389687 4797 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a0649e0a-7249-45bd-ad8f-6c7e61456322-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.389695 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a0649e0a-7249-45bd-ad8f-6c7e61456322-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.788898 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9","Type":"ContainerStarted","Data":"a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9"}
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.788943 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9","Type":"ContainerStarted","Data":"f1a647c5c65bb7b7e88d89f4d1082f72fa483a67e77819dc455c66b24ff4469d"}
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.793015 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a0649e0a-7249-45bd-ad8f-6c7e61456322","Type":"ContainerDied","Data":"63163e339aeaaab65320b953243392c886b4585797a8e1377a2249e3978ef011"}
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.793046 4797 scope.go:117] "RemoveContainer" containerID="8ec571c2b817b180635e617a198eb9781578ffc63b6f1112199cd05770192a1f"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.793211 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.799287 4797 generic.go:334] "Generic (PLEG): container finished" podID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerID="440518e9c4ad2edd6bc4f33e5257246ae9d79c003d368ae09be140b7dd8954ab" exitCode=0
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.799359 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" event={"ID":"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d","Type":"ContainerDied","Data":"440518e9c4ad2edd6bc4f33e5257246ae9d79c003d368ae09be140b7dd8954ab"}
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.799384 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" event={"ID":"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d","Type":"ContainerStarted","Data":"2b6f131cc26f2e0e2d5632e31bac4481cfa43959f104a620996d59a57f881faf"}
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.807011 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6fd696486f-x6hfl" event={"ID":"247490ab-e07e-4491-854a-1adda964c68a","Type":"ContainerStarted","Data":"1cba66b9d618259b18c3bccc98945f19dc53a52b9eba0769a975d26986a906c7"}
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.808335 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6fd696486f-x6hfl"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.839205 4797 scope.go:117] "RemoveContainer" containerID="3a4fec4acf58ad5024698898910bc1c83246e0892c3afd601b13010a18c5474e"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.913636 4797 scope.go:117] "RemoveContainer" containerID="34fab4cc55adc1a4ff05f3d8123f52bbaf777fcbcf8b214b844d1782f191e045"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.914121 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.938474 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.938857 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6fd696486f-x6hfl" podStartSLOduration=3.938844412 podStartE2EDuration="3.938844412s" podCreationTimestamp="2026-02-16 11:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:32.877741933 +0000 UTC m=+1187.597926913" watchObservedRunningTime="2026-02-16 11:26:32.938844412 +0000 UTC m=+1187.659029392"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.977083 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 11:26:32 crc kubenswrapper[4797]: E0216 11:26:32.980998 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-central-agent"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.981037 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-central-agent"
Feb 16 11:26:32 crc kubenswrapper[4797]: E0216 11:26:32.981059 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="sg-core"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.981066 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="sg-core"
Feb 16 11:26:32 crc kubenswrapper[4797]: E0216 11:26:32.981089 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-notification-agent"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.981096 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-notification-agent"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.981359 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-notification-agent"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.981377 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="ceilometer-central-agent"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.981395 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" containerName="sg-core"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.988115 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.988119 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.990096 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 11:26:32 crc kubenswrapper[4797]: I0216 11:26:32.990289 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.121161 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-config-data\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.121233 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwvk7\" (UniqueName: \"kubernetes.io/projected/5a6c5fd9-4966-4a79-8d43-dd87cf706681-kube-api-access-rwvk7\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.121260 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.121528 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-log-httpd\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.121620 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.121727 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-run-httpd\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.121791 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-scripts\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.223930 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-run-httpd\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.224018 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-scripts\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.224061 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-config-data\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.224100 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwvk7\" (UniqueName: \"kubernetes.io/projected/5a6c5fd9-4966-4a79-8d43-dd87cf706681-kube-api-access-rwvk7\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.224126 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.224167 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-log-httpd\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.224211 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.225081 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-run-httpd\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.230359 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-log-httpd\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.231023 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.233202 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-config-data\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.233298 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-scripts\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.234083 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.245330 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwvk7\" (UniqueName: \"kubernetes.io/projected/5a6c5fd9-4966-4a79-8d43-dd87cf706681-kube-api-access-rwvk7\") pod \"ceilometer-0\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.346330 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.828995 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9","Type":"ContainerStarted","Data":"3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c"}
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.829686 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api-log" containerID="cri-o://a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9" gracePeriod=30
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.829759 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.830180 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api" containerID="cri-o://3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c" gracePeriod=30
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.836985 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.840871 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5cb0a42d-c78d-40ae-b936-7ba7b7749437","Type":"ContainerStarted","Data":"2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c"}
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.840975 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5cb0a42d-c78d-40ae-b936-7ba7b7749437","Type":"ContainerStarted","Data":"39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71"}
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.857944 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" event={"ID":"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d","Type":"ContainerStarted","Data":"2c28b27d117a180c8980327f89df3aad3efa0ba7c2379ec434836fd99c08c365"}
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.858476 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv"
Feb 16 11:26:33 crc kubenswrapper[4797]: W0216 11:26:33.865252 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a6c5fd9_4966_4a79_8d43_dd87cf706681.slice/crio-71c21c7399b13e29a5d4b8a551057e956678efd13eeb76db51d74f64b3b30ace WatchSource:0}: Error finding container 71c21c7399b13e29a5d4b8a551057e956678efd13eeb76db51d74f64b3b30ace: Status 404 returned error can't find the container with id 71c21c7399b13e29a5d4b8a551057e956678efd13eeb76db51d74f64b3b30ace
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.869933 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.869912138 podStartE2EDuration="3.869912138s" podCreationTimestamp="2026-02-16 11:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:33.854499016 +0000 UTC m=+1188.574684016" watchObservedRunningTime="2026-02-16 11:26:33.869912138 +0000 UTC m=+1188.590097118"
Feb 16 11:26:33 crc kubenswrapper[4797]: I0216 11:26:33.891443 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" podStartSLOduration=3.891417045 podStartE2EDuration="3.891417045s" podCreationTimestamp="2026-02-16 11:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:33.878486173 +0000 UTC m=+1188.598671173" watchObservedRunningTime="2026-02-16 11:26:33.891417045 +0000 UTC m=+1188.611602025"
Feb 16 11:26:33 crc kubenswrapper[4797]: E0216 11:26:33.987022 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:26:34 crc kubenswrapper[4797]: I0216 11:26:34.007300 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0649e0a-7249-45bd-ad8f-6c7e61456322" path="/var/lib/kubelet/pods/a0649e0a-7249-45bd-ad8f-6c7e61456322/volumes"
Feb 16 11:26:34 crc kubenswrapper[4797]: I0216 11:26:34.873788 4797 generic.go:334] "Generic (PLEG): container finished" podID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerID="a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9" exitCode=143
Feb 16 11:26:34 crc kubenswrapper[4797]: I0216 11:26:34.873854 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9","Type":"ContainerDied","Data":"a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9"}
Feb 16 11:26:34 crc kubenswrapper[4797]: I0216 11:26:34.876148 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerStarted","Data":"9309714b13cc4ae0e604631d6cb4f4f2a2e22140f414242db05755f7b894689b"}
Feb 16 11:26:34 crc kubenswrapper[4797]: I0216 11:26:34.876352 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerStarted","Data":"71c21c7399b13e29a5d4b8a551057e956678efd13eeb76db51d74f64b3b30ace"}
Feb 16 11:26:34 crc kubenswrapper[4797]: I0216 11:26:34.901906 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.076132125 podStartE2EDuration="4.901887702s" podCreationTimestamp="2026-02-16 11:26:30 +0000 UTC" firstStartedPulling="2026-02-16 11:26:31.441041762 +0000 UTC m=+1186.161226742" lastFinishedPulling="2026-02-16 11:26:32.266797339 +0000 UTC m=+1186.986982319" observedRunningTime="2026-02-16 11:26:34.891407555 +0000 UTC m=+1189.611592535" watchObservedRunningTime="2026-02-16 11:26:34.901887702 +0000 UTC m=+1189.622072682"
Feb 16 11:26:35 crc kubenswrapper[4797]: I0216 11:26:35.855806 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 16 11:26:35 crc kubenswrapper[4797]: I0216 11:26:35.888953 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerStarted","Data":"56d10d8514d89b841ab2c5557e81b8cf55e9197e50a392c6084db042d8e5161c"}
Feb 16 11:26:36 crc kubenswrapper[4797]: I0216 11:26:36.901178 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerStarted","Data":"2b9b55ead49dd0329c8b513566c2c3243b20c84378a7145d82cac73c7e1f245a"}
Feb 16 11:26:36 crc kubenswrapper[4797]: I0216 11:26:36.930138 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:37 crc kubenswrapper[4797]: I0216 11:26:37.784011 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5dfb479d6b-r2spn"
Feb 16 11:26:37 crc kubenswrapper[4797]: I0216 11:26:37.914561 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerStarted","Data":"61b0101d153c0a1978f9b731c493f10ade2e6d712b3b9ee6f5fa6f2b0547916e"}
Feb 16 11:26:37 crc kubenswrapper[4797]: I0216 11:26:37.914878 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 16 11:26:37 crc kubenswrapper[4797]: I0216 11:26:37.944828 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.436032115 podStartE2EDuration="5.944805173s" podCreationTimestamp="2026-02-16 11:26:32 +0000 UTC" firstStartedPulling="2026-02-16 11:26:33.871238664 +0000 UTC m=+1188.591423644" lastFinishedPulling="2026-02-16 11:26:37.380011682 +0000 UTC m=+1192.100196702" observedRunningTime="2026-02-16 11:26:37.940442254 +0000 UTC m=+1192.660627234" watchObservedRunningTime="2026-02-16 11:26:37.944805173 +0000 UTC m=+1192.664990153"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.057225 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7fb7cc766-tfhd7"]
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.064968 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.098314 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fb7cc766-tfhd7"]
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.172320 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-combined-ca-bundle\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.172410 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dv7t\" (UniqueName: \"kubernetes.io/projected/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-kube-api-access-8dv7t\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.172467 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-config-data\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.172501 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-logs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.172534 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-public-tls-certs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.172633 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-internal-tls-certs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.172688 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-scripts\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.274509 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-combined-ca-bundle\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.274609 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dv7t\" (UniqueName: \"kubernetes.io/projected/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-kube-api-access-8dv7t\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.274667 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-config-data\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.274698 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-logs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.274725 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-public-tls-certs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.274813 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-internal-tls-certs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.274850 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-scripts\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.276349 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-logs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.282270 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-scripts\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.282411 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-public-tls-certs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.283073 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-config-data\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.283604 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-internal-tls-certs\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.284493 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-combined-ca-bundle\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.297293 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dv7t\" (UniqueName: \"kubernetes.io/projected/0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8-kube-api-access-8dv7t\") pod \"placement-7fb7cc766-tfhd7\" (UID: \"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8\") " pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.440735 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.541418 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-99c64f77c-dxwz8"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.639942 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-664675cd85-bc4lp"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.794652 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-combined-ca-bundle\") pod \"a81700a8-2372-4b4d-a769-5b6936ac7aba\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") "
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.794747 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-internal-tls-certs\") pod \"a81700a8-2372-4b4d-a769-5b6936ac7aba\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") "
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.794931 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-config\") pod \"a81700a8-2372-4b4d-a769-5b6936ac7aba\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") "
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.794994 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-public-tls-certs\") pod \"a81700a8-2372-4b4d-a769-5b6936ac7aba\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") "
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.795041 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whlwx\" (UniqueName: \"kubernetes.io/projected/a81700a8-2372-4b4d-a769-5b6936ac7aba-kube-api-access-whlwx\") pod \"a81700a8-2372-4b4d-a769-5b6936ac7aba\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") "
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.795095 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-httpd-config\") pod \"a81700a8-2372-4b4d-a769-5b6936ac7aba\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") "
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.795132 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-ovndb-tls-certs\") pod \"a81700a8-2372-4b4d-a769-5b6936ac7aba\" (UID: \"a81700a8-2372-4b4d-a769-5b6936ac7aba\") "
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.800450 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a81700a8-2372-4b4d-a769-5b6936ac7aba-kube-api-access-whlwx" (OuterVolumeSpecName: "kube-api-access-whlwx") pod "a81700a8-2372-4b4d-a769-5b6936ac7aba" (UID: "a81700a8-2372-4b4d-a769-5b6936ac7aba"). InnerVolumeSpecName "kube-api-access-whlwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.801026 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a81700a8-2372-4b4d-a769-5b6936ac7aba" (UID: "a81700a8-2372-4b4d-a769-5b6936ac7aba"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.871180 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a81700a8-2372-4b4d-a769-5b6936ac7aba" (UID: "a81700a8-2372-4b4d-a769-5b6936ac7aba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.880712 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a81700a8-2372-4b4d-a769-5b6936ac7aba" (UID: "a81700a8-2372-4b4d-a769-5b6936ac7aba"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.882805 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a81700a8-2372-4b4d-a769-5b6936ac7aba" (UID: "a81700a8-2372-4b4d-a769-5b6936ac7aba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.886690 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-config" (OuterVolumeSpecName: "config") pod "a81700a8-2372-4b4d-a769-5b6936ac7aba" (UID: "a81700a8-2372-4b4d-a769-5b6936ac7aba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.897327 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-config\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.897362 4797 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.897372 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whlwx\" (UniqueName: \"kubernetes.io/projected/a81700a8-2372-4b4d-a769-5b6936ac7aba-kube-api-access-whlwx\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.897381 4797 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.897390 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.897397 4797 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.909855 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a81700a8-2372-4b4d-a769-5b6936ac7aba" (UID: "a81700a8-2372-4b4d-a769-5b6936ac7aba"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.924784 4797 generic.go:334] "Generic (PLEG): container finished" podID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerID="e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253" exitCode=0
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.924909 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-664675cd85-bc4lp"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.924980 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-664675cd85-bc4lp" event={"ID":"a81700a8-2372-4b4d-a769-5b6936ac7aba","Type":"ContainerDied","Data":"e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253"}
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.925032 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-664675cd85-bc4lp" event={"ID":"a81700a8-2372-4b4d-a769-5b6936ac7aba","Type":"ContainerDied","Data":"ac8bfcd3a5cee296b72d2edaa69ac0ed498ba1022de610726f72e26aa8dce8d9"}
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.925053 4797 scope.go:117] "RemoveContainer" containerID="5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.955054 4797 scope.go:117] "RemoveContainer" containerID="e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.981286 4797 scope.go:117] "RemoveContainer" containerID="5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db"
Feb 16 11:26:38 crc kubenswrapper[4797]: E0216 11:26:38.981810 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db\": container with ID starting with 5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db not found: ID does not exist" containerID="5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.981880 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db"} err="failed to get container status \"5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db\": rpc error: code = NotFound desc = could not find container \"5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db\": container with ID starting with 5a4348f49bfc20b82cc85fd8e572727c854342e84dc19a338bc49230af9a08db not found: ID does not exist"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.981906 4797 scope.go:117] "RemoveContainer" containerID="e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253"
Feb 16 11:26:38 crc kubenswrapper[4797]: E0216 11:26:38.984017 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253\": container with ID starting with e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253 not found: ID does not exist" containerID="e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.984089 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253"} err="failed to get container status \"e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253\": rpc error: code = NotFound desc = could not find container \"e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253\": container with ID starting with e2b2843301100c20f6d491d59b77cb561092131dec8e408574b216fc6ab4b253 not found: ID does not exist"
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.987607 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-664675cd85-bc4lp"]
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.997612 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-664675cd85-bc4lp"]
Feb 16 11:26:38 crc kubenswrapper[4797]: I0216 11:26:38.999153 4797 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a81700a8-2372-4b4d-a769-5b6936ac7aba-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:39 crc kubenswrapper[4797]: I0216 11:26:39.013887 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7fb7cc766-tfhd7"]
Feb 16 11:26:39 crc kubenswrapper[4797]: W0216 11:26:39.014744 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e8b9f5e_8e8c_4315_8cf6_70eeb304deb8.slice/crio-fc04831a0310b3b77f41bd1aac5598857a7c78757642b042a91da42ddbf1fca8 WatchSource:0}: Error finding container fc04831a0310b3b77f41bd1aac5598857a7c78757642b042a91da42ddbf1fca8: Status 404 returned error can't find the container with id fc04831a0310b3b77f41bd1aac5598857a7c78757642b042a91da42ddbf1fca8
Feb 16 11:26:39 crc kubenswrapper[4797]: I0216 11:26:39.942245 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb7cc766-tfhd7" event={"ID":"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8","Type":"ContainerStarted","Data":"d4c697f08ead3209f3162357b4a1c305c4818a8aca0f2f92de03a35c59d5229e"}
Feb 16 11:26:39 crc kubenswrapper[4797]: I0216 11:26:39.942560 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb7cc766-tfhd7" event={"ID":"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8","Type":"ContainerStarted","Data":"06bc7cc2aee1edf38c4fde079da36aff03ed80dfa7d3fc056932dc1b755cee57"}
Feb 16 11:26:39 crc kubenswrapper[4797]: I0216 11:26:39.942570 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7fb7cc766-tfhd7" event={"ID":"0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8","Type":"ContainerStarted","Data":"fc04831a0310b3b77f41bd1aac5598857a7c78757642b042a91da42ddbf1fca8"}
Feb 16 11:26:39 crc kubenswrapper[4797]: I0216 11:26:39.943865 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:39 crc kubenswrapper[4797]: I0216 11:26:39.972553 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7fb7cc766-tfhd7" podStartSLOduration=1.972523349 podStartE2EDuration="1.972523349s" podCreationTimestamp="2026-02-16 11:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:39.963545914 +0000 UTC m=+1194.683730904" watchObservedRunningTime="2026-02-16 11:26:39.972523349 +0000 UTC m=+1194.692708329"
Feb 16 11:26:39 crc kubenswrapper[4797]: I0216 11:26:39.993739 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" path="/var/lib/kubelet/pods/a81700a8-2372-4b4d-a769-5b6936ac7aba/volumes"
Feb 16 11:26:40 crc kubenswrapper[4797]: I0216 11:26:40.895749 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv"
Feb 16 11:26:40 crc kubenswrapper[4797]: I0216 11:26:40.969738 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7fb7cc766-tfhd7"
Feb 16 11:26:40 crc kubenswrapper[4797]: I0216 11:26:40.970482 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gl82w"]
Feb 16 11:26:40 crc kubenswrapper[4797]: I0216 11:26:40.970744 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerName="dnsmasq-dns" containerID="cri-o://9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd" gracePeriod=10
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.099466 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.178417 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.394649 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Feb 16 11:26:41 crc kubenswrapper[4797]: E0216 11:26:41.395362 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-api"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.395374 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-api"
Feb 16 11:26:41 crc kubenswrapper[4797]: E0216 11:26:41.395417 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-httpd"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.395423 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-httpd"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.395607 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-api"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.395621 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a81700a8-2372-4b4d-a769-5b6936ac7aba" containerName="neutron-httpd"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.396343 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.404965 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.405159 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.405167 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lk8jd"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.435654 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.566954 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5722\" (UniqueName: \"kubernetes.io/projected/33c9ce82-2d91-49fd-935c-18996e6ecc18-kube-api-access-l5722\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.567046 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/33c9ce82-2d91-49fd-935c-18996e6ecc18-openstack-config-secret\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.567087 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c9ce82-2d91-49fd-935c-18996e6ecc18-combined-ca-bundle\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.567210 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/33c9ce82-2d91-49fd-935c-18996e6ecc18-openstack-config\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.618915 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gl82w"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.671831 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5722\" (UniqueName: \"kubernetes.io/projected/33c9ce82-2d91-49fd-935c-18996e6ecc18-kube-api-access-l5722\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.671909 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/33c9ce82-2d91-49fd-935c-18996e6ecc18-openstack-config-secret\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.671938 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c9ce82-2d91-49fd-935c-18996e6ecc18-combined-ca-bundle\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.672053 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/33c9ce82-2d91-49fd-935c-18996e6ecc18-openstack-config\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.672997 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/33c9ce82-2d91-49fd-935c-18996e6ecc18-openstack-config\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.684277 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/33c9ce82-2d91-49fd-935c-18996e6ecc18-openstack-config-secret\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.691819 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33c9ce82-2d91-49fd-935c-18996e6ecc18-combined-ca-bundle\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.702010 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5722\" (UniqueName: \"kubernetes.io/projected/33c9ce82-2d91-49fd-935c-18996e6ecc18-kube-api-access-l5722\") pod \"openstackclient\" (UID: \"33c9ce82-2d91-49fd-935c-18996e6ecc18\") " pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.703076 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.703203 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.766649 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.787338 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-nb\") pod \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") "
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.787388 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-sb\") pod \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") "
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.787494 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mht4\" (UniqueName: \"kubernetes.io/projected/b3eaa18d-dc3e-4499-b37e-58ff7449745f-kube-api-access-2mht4\") pod \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") "
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.787594 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-svc\") pod \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") "
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.787618 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-config\") pod \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") "
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.787715 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-swift-storage-0\") pod \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\" (UID: \"b3eaa18d-dc3e-4499-b37e-58ff7449745f\") "
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.792497 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3eaa18d-dc3e-4499-b37e-58ff7449745f-kube-api-access-2mht4" (OuterVolumeSpecName: "kube-api-access-2mht4") pod "b3eaa18d-dc3e-4499-b37e-58ff7449745f" (UID: "b3eaa18d-dc3e-4499-b37e-58ff7449745f"). InnerVolumeSpecName "kube-api-access-2mht4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.859323 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3eaa18d-dc3e-4499-b37e-58ff7449745f" (UID: "b3eaa18d-dc3e-4499-b37e-58ff7449745f"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.883599 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3eaa18d-dc3e-4499-b37e-58ff7449745f" (UID: "b3eaa18d-dc3e-4499-b37e-58ff7449745f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.884299 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3eaa18d-dc3e-4499-b37e-58ff7449745f" (UID: "b3eaa18d-dc3e-4499-b37e-58ff7449745f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.884313 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b3eaa18d-dc3e-4499-b37e-58ff7449745f" (UID: "b3eaa18d-dc3e-4499-b37e-58ff7449745f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.884569 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-config" (OuterVolumeSpecName: "config") pod "b3eaa18d-dc3e-4499-b37e-58ff7449745f" (UID: "b3eaa18d-dc3e-4499-b37e-58ff7449745f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.889588 4797 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.889626 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.889635 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.889645 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mht4\" (UniqueName: \"kubernetes.io/projected/b3eaa18d-dc3e-4499-b37e-58ff7449745f-kube-api-access-2mht4\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.889656 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.889665 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3eaa18d-dc3e-4499-b37e-58ff7449745f-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.997254 4797 generic.go:334] "Generic (PLEG): container finished" 
podID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerID="9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd" exitCode=0 Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.997682 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.998171 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="probe" containerID="cri-o://2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c" gracePeriod=30 Feb 16 11:26:41 crc kubenswrapper[4797]: I0216 11:26:41.998159 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="cinder-scheduler" containerID="cri-o://39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71" gracePeriod=30 Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.004107 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" event={"ID":"b3eaa18d-dc3e-4499-b37e-58ff7449745f","Type":"ContainerDied","Data":"9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd"} Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.004148 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" event={"ID":"b3eaa18d-dc3e-4499-b37e-58ff7449745f","Type":"ContainerDied","Data":"40c4eeeb11b9f8092b9bb0790192211432cd1084a5bb71f758478d287f8f4770"} Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.004166 4797 scope.go:117] "RemoveContainer" containerID="9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd" Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.038729 4797 scope.go:117] "RemoveContainer" containerID="5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46" Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.070895 4797 scope.go:117] "RemoveContainer" containerID="9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd" Feb 16 11:26:42 crc kubenswrapper[4797]: E0216 11:26:42.071357 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd\": container with ID starting with 9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd not found: ID does not exist" containerID="9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd" Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.071388 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd"} err="failed to get container status \"9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd\": rpc error: code = NotFound desc = could not find container \"9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd\": container with ID starting with 9f24964d47a93399207c03eb66d81768a0602b02efaf4509bb0b0e0c4e9025dd not found: ID does not exist" Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.071414 4797 scope.go:117] "RemoveContainer" containerID="5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46" Feb 16 11:26:42 crc kubenswrapper[4797]: E0216 11:26:42.071779 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46\": container with ID starting with 5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46 not found: ID does not exist" containerID="5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46" Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.071801 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46"} err="failed to get container status \"5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46\": rpc error: code = NotFound desc = could not find container \"5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46\": container with ID starting with 5e1c07b2c7d8702b0cd7274d70f562245d35f11aa92c84686475cca68ed28f46 not found: ID does not exist" Feb 16 11:26:42 crc kubenswrapper[4797]: I0216 11:26:42.262429 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 11:26:43 crc kubenswrapper[4797]: I0216 11:26:43.014666 4797 generic.go:334] "Generic (PLEG): container finished" podID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerID="2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c" exitCode=0 Feb 16 11:26:43 crc kubenswrapper[4797]: I0216 11:26:43.014742 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5cb0a42d-c78d-40ae-b936-7ba7b7749437","Type":"ContainerDied","Data":"2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c"} Feb 16 11:26:43 crc kubenswrapper[4797]: I0216 11:26:43.017489 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"33c9ce82-2d91-49fd-935c-18996e6ecc18","Type":"ContainerStarted","Data":"9103b6cc26ce75734f2fc397c7cca128a8e5ca4e3a13d13af1f6d372e14c6dca"} Feb 16 11:26:43 crc kubenswrapper[4797]: I0216 11:26:43.749390 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 11:26:44 crc kubenswrapper[4797]: E0216 11:26:44.296925 4797 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cb0a42d_c78d_40ae_b936_7ba7b7749437.slice/crio-conmon-39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cb0a42d_c78d_40ae_b936_7ba7b7749437.slice/crio-39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71.scope\": RecentStats: unable to find data in memory cache]" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.663940 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.759180 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data-custom\") pod \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.759354 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cb0a42d-c78d-40ae-b936-7ba7b7749437-etc-machine-id\") pod \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.759409 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v52g6\" (UniqueName: \"kubernetes.io/projected/5cb0a42d-c78d-40ae-b936-7ba7b7749437-kube-api-access-v52g6\") pod \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.759481 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-scripts\") pod \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.759611 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data\") pod \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.759694 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-combined-ca-bundle\") pod \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\" (UID: \"5cb0a42d-c78d-40ae-b936-7ba7b7749437\") " Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.759471 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cb0a42d-c78d-40ae-b936-7ba7b7749437-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5cb0a42d-c78d-40ae-b936-7ba7b7749437" (UID: "5cb0a42d-c78d-40ae-b936-7ba7b7749437"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.765707 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cb0a42d-c78d-40ae-b936-7ba7b7749437-kube-api-access-v52g6" (OuterVolumeSpecName: "kube-api-access-v52g6") pod "5cb0a42d-c78d-40ae-b936-7ba7b7749437" (UID: "5cb0a42d-c78d-40ae-b936-7ba7b7749437"). InnerVolumeSpecName "kube-api-access-v52g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.766495 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-scripts" (OuterVolumeSpecName: "scripts") pod "5cb0a42d-c78d-40ae-b936-7ba7b7749437" (UID: "5cb0a42d-c78d-40ae-b936-7ba7b7749437"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.766515 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5cb0a42d-c78d-40ae-b936-7ba7b7749437" (UID: "5cb0a42d-c78d-40ae-b936-7ba7b7749437"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.803994 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-64d4c9c779-ctrqz"] Feb 16 11:26:44 crc kubenswrapper[4797]: E0216 11:26:44.804375 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="cinder-scheduler" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.804392 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="cinder-scheduler" Feb 16 11:26:44 crc kubenswrapper[4797]: E0216 11:26:44.804428 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="probe" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.804434 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="probe" Feb 16 11:26:44 crc kubenswrapper[4797]: E0216 11:26:44.804446 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerName="dnsmasq-dns" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.804452 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerName="dnsmasq-dns" Feb 16 11:26:44 crc kubenswrapper[4797]: E0216 11:26:44.804465 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerName="init" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.804471 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerName="init" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.804658 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" containerName="dnsmasq-dns" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.804675 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="probe" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.804690 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerName="cinder-scheduler" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.810481 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.816232 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.816513 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.816685 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.852301 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64d4c9c779-ctrqz"] Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.865935 4797 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.865973 4797 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5cb0a42d-c78d-40ae-b936-7ba7b7749437-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.865982 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v52g6\" (UniqueName: \"kubernetes.io/projected/5cb0a42d-c78d-40ae-b936-7ba7b7749437-kube-api-access-v52g6\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.865992 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.882870 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5cb0a42d-c78d-40ae-b936-7ba7b7749437" (UID: "5cb0a42d-c78d-40ae-b936-7ba7b7749437"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.962410 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data" (OuterVolumeSpecName: "config-data") pod "5cb0a42d-c78d-40ae-b936-7ba7b7749437" (UID: "5cb0a42d-c78d-40ae-b936-7ba7b7749437"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970229 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-combined-ca-bundle\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970342 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-public-tls-certs\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970386 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-run-httpd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970414 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-internal-tls-certs\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970438 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncsrd\" (UniqueName: \"kubernetes.io/projected/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-kube-api-access-ncsrd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970475 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-config-data\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970508 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-log-httpd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.970587 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-etc-swift\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.971005 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-config-data\") on node \"crc\" 
DevicePath \"\"" Feb 16 11:26:44 crc kubenswrapper[4797]: I0216 11:26:44.971039 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cb0a42d-c78d-40ae-b936-7ba7b7749437-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.049809 4797 generic.go:334] "Generic (PLEG): container finished" podID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" containerID="39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71" exitCode=0 Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.049879 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.050172 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5cb0a42d-c78d-40ae-b936-7ba7b7749437","Type":"ContainerDied","Data":"39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71"} Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.050288 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5cb0a42d-c78d-40ae-b936-7ba7b7749437","Type":"ContainerDied","Data":"93b7be5474507e2953a5342c820215176751bfb1729a929e850134baf336bcc5"} Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.050380 4797 scope.go:117] "RemoveContainer" containerID="2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.073101 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-combined-ca-bundle\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.073499 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-public-tls-certs\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.073644 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-run-httpd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.073734 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-internal-tls-certs\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.073813 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncsrd\" (UniqueName: \"kubernetes.io/projected/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-kube-api-access-ncsrd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.073913 4797 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-config-data\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.074042 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-log-httpd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.074187 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-etc-swift\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.075341 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-log-httpd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.073527 4797 scope.go:117] "RemoveContainer" containerID="39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.076267 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-run-httpd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.078555 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-public-tls-certs\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.079151 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-etc-swift\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.080870 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-combined-ca-bundle\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.088539 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-config-data\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 
11:26:45.093377 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-internal-tls-certs\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.099852 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncsrd\" (UniqueName: \"kubernetes.io/projected/8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd-kube-api-access-ncsrd\") pod \"swift-proxy-64d4c9c779-ctrqz\" (UID: \"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd\") " pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.117616 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.126787 4797 scope.go:117] "RemoveContainer" containerID="2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c" Feb 16 11:26:45 crc kubenswrapper[4797]: E0216 11:26:45.127441 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c\": container with ID starting with 2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c not found: ID does not exist" containerID="2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.127559 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c"} err="failed to get container status \"2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c\": rpc error: code = NotFound desc = could not find container \"2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c\": container with ID starting with 2c42d243166c17cd9039170eede22fe00162e89eb64f9baebf17403f7824232c not found: ID does not exist" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.127719 4797 scope.go:117] "RemoveContainer" containerID="39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71" Feb 16 11:26:45 crc kubenswrapper[4797]: E0216 11:26:45.127987 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71\": container with ID starting with 39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71 not found: ID does not exist" containerID="39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.128098 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71"} err="failed to get container status \"39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71\": rpc error: code = NotFound desc = could not find container \"39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71\": container with ID starting with 39eb620b180ef53e564bc8d91d39dc5c9af51efe7d18115b18f237922aa69a71 not found: ID does not exist" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.157716 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:45 crc 
kubenswrapper[4797]: I0216 11:26:45.171878 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.174479 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.176861 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.192765 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.207506 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.281264 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz5jj\" (UniqueName: \"kubernetes.io/projected/6e588df8-cba6-4258-bead-5c3523b99023-kube-api-access-gz5jj\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.281444 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.281915 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e588df8-cba6-4258-bead-5c3523b99023-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.282204 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.282731 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.282850 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.385566 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e588df8-cba6-4258-bead-5c3523b99023-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: 
I0216 11:26:45.385639 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.385674 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.385680 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e588df8-cba6-4258-bead-5c3523b99023-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.385711 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.385860 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz5jj\" (UniqueName: \"kubernetes.io/projected/6e588df8-cba6-4258-bead-5c3523b99023-kube-api-access-gz5jj\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.386008 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.398207 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.407146 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.411061 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.417157 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz5jj\" (UniqueName: \"kubernetes.io/projected/6e588df8-cba6-4258-bead-5c3523b99023-kube-api-access-gz5jj\") pod \"cinder-scheduler-0\" (UID: 
\"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.427920 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e588df8-cba6-4258-bead-5c3523b99023-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e588df8-cba6-4258-bead-5c3523b99023\") " pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.506085 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.896602 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64d4c9c779-ctrqz"] Feb 16 11:26:45 crc kubenswrapper[4797]: W0216 11:26:45.911807 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ccabb0a_9a9d_4535_9075_b9ff8bc59dbd.slice/crio-1c5737b0dc126815b0093ffdfe723c3e459f6792e010680763f966b1e44c9ee8 WatchSource:0}: Error finding container 1c5737b0dc126815b0093ffdfe723c3e459f6792e010680763f966b1e44c9ee8: Status 404 returned error can't find the container with id 1c5737b0dc126815b0093ffdfe723c3e459f6792e010680763f966b1e44c9ee8 Feb 16 11:26:45 crc kubenswrapper[4797]: E0216 11:26:45.997814 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:26:45 crc kubenswrapper[4797]: I0216 11:26:45.997917 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cb0a42d-c78d-40ae-b936-7ba7b7749437" path="/var/lib/kubelet/pods/5cb0a42d-c78d-40ae-b936-7ba7b7749437/volumes" Feb 16 11:26:46 crc kubenswrapper[4797]: I0216 11:26:46.081853 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d4c9c779-ctrqz" event={"ID":"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd","Type":"ContainerStarted","Data":"1c5737b0dc126815b0093ffdfe723c3e459f6792e010680763f966b1e44c9ee8"} Feb 16 11:26:46 crc kubenswrapper[4797]: I0216 11:26:46.182879 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 11:26:46 crc kubenswrapper[4797]: I0216 11:26:46.777590 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:26:46 crc kubenswrapper[4797]: I0216 11:26:46.777853 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-central-agent" containerID="cri-o://9309714b13cc4ae0e604631d6cb4f4f2a2e22140f414242db05755f7b894689b" gracePeriod=30 Feb 16 11:26:46 crc kubenswrapper[4797]: I0216 11:26:46.778240 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="proxy-httpd" containerID="cri-o://61b0101d153c0a1978f9b731c493f10ade2e6d712b3b9ee6f5fa6f2b0547916e" gracePeriod=30 Feb 16 11:26:46 crc kubenswrapper[4797]: I0216 11:26:46.778280 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="sg-core" 
containerID="cri-o://2b9b55ead49dd0329c8b513566c2c3243b20c84378a7145d82cac73c7e1f245a" gracePeriod=30 Feb 16 11:26:46 crc kubenswrapper[4797]: I0216 11:26:46.778314 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-notification-agent" containerID="cri-o://56d10d8514d89b841ab2c5557e81b8cf55e9197e50a392c6084db042d8e5161c" gracePeriod=30 Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.098085 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e588df8-cba6-4258-bead-5c3523b99023","Type":"ContainerStarted","Data":"ba591fdb5b310e8f06e0aac0d6112f5f27a621afe797b61b2264f5c30ed5dee3"} Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.098417 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e588df8-cba6-4258-bead-5c3523b99023","Type":"ContainerStarted","Data":"01ea9b449bd98186dcf148c88d201ef448647daf28985f939f05dacce121a4fd"} Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.100621 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d4c9c779-ctrqz" event={"ID":"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd","Type":"ContainerStarted","Data":"28a2eb12fb7bd8ce148a5c46cee3f1fb7aa5bef9105a0d59557ac3daad7420f1"} Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.100682 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64d4c9c779-ctrqz" event={"ID":"8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd","Type":"ContainerStarted","Data":"27d57ba3090c313a4d2667934999f61b2ae8d563c83fec68606d98094340e4c7"} Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.101185 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.101260 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.110311 4797 generic.go:334] "Generic (PLEG): container finished" podID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerID="61b0101d153c0a1978f9b731c493f10ade2e6d712b3b9ee6f5fa6f2b0547916e" exitCode=0 Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.110355 4797 generic.go:334] "Generic (PLEG): container finished" podID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerID="2b9b55ead49dd0329c8b513566c2c3243b20c84378a7145d82cac73c7e1f245a" exitCode=2 Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.110383 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerDied","Data":"61b0101d153c0a1978f9b731c493f10ade2e6d712b3b9ee6f5fa6f2b0547916e"} Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.110415 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerDied","Data":"2b9b55ead49dd0329c8b513566c2c3243b20c84378a7145d82cac73c7e1f245a"} Feb 16 11:26:47 crc kubenswrapper[4797]: I0216 11:26:47.119392 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-64d4c9c779-ctrqz" podStartSLOduration=3.119369319 podStartE2EDuration="3.119369319s" podCreationTimestamp="2026-02-16 11:26:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-02-16 11:26:47.11902203 +0000 UTC m=+1201.839207010" watchObservedRunningTime="2026-02-16 11:26:47.119369319 +0000 UTC m=+1201.839554309" Feb 16 11:26:48 crc kubenswrapper[4797]: I0216 11:26:48.125752 4797 generic.go:334] "Generic (PLEG): container finished" podID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerID="56d10d8514d89b841ab2c5557e81b8cf55e9197e50a392c6084db042d8e5161c" exitCode=0 Feb 16 11:26:48 crc kubenswrapper[4797]: I0216 11:26:48.126147 4797 generic.go:334] "Generic (PLEG): container finished" podID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerID="9309714b13cc4ae0e604631d6cb4f4f2a2e22140f414242db05755f7b894689b" exitCode=0 Feb 16 11:26:48 crc kubenswrapper[4797]: I0216 11:26:48.126186 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerDied","Data":"56d10d8514d89b841ab2c5557e81b8cf55e9197e50a392c6084db042d8e5161c"} Feb 16 11:26:48 crc kubenswrapper[4797]: I0216 11:26:48.126210 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerDied","Data":"9309714b13cc4ae0e604631d6cb4f4f2a2e22140f414242db05755f7b894689b"} Feb 16 11:26:48 crc kubenswrapper[4797]: I0216 11:26:48.128810 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e588df8-cba6-4258-bead-5c3523b99023","Type":"ContainerStarted","Data":"2c84b36a120662bb7af63e6ca5c3631b76cec1da0bee469d1bd87d3dabd09827"} Feb 16 11:26:48 crc kubenswrapper[4797]: I0216 11:26:48.148210 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.148188657 podStartE2EDuration="3.148188657s" podCreationTimestamp="2026-02-16 11:26:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:48.144367922 +0000 UTC m=+1202.864552902" watchObservedRunningTime="2026-02-16 11:26:48.148188657 +0000 UTC m=+1202.868373637" Feb 16 11:26:50 crc kubenswrapper[4797]: I0216 11:26:50.506256 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 11:26:52 crc kubenswrapper[4797]: I0216 11:26:52.289102 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:26:52 crc kubenswrapper[4797]: I0216 11:26:52.289661 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-log" containerID="cri-o://0b44b44e4ec5a8f588432c89ba3976eba18143ce5194edf83079a6351b89e0a1" gracePeriod=30 Feb 16 11:26:52 crc kubenswrapper[4797]: I0216 11:26:52.289797 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-httpd" containerID="cri-o://068392753fe07fe9efc2d75cfdef54388401fa6ff9d164599a854dbaeb33fb60" gracePeriod=30 Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.189202 4797 generic.go:334] "Generic (PLEG): container finished" podID="7692da44-8fff-4c27-8069-4278620f1d55" containerID="0b44b44e4ec5a8f588432c89ba3976eba18143ce5194edf83079a6351b89e0a1" exitCode=143 Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.189340 4797 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7692da44-8fff-4c27-8069-4278620f1d55","Type":"ContainerDied","Data":"0b44b44e4ec5a8f588432c89ba3976eba18143ce5194edf83079a6351b89e0a1"} Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.192519 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a6c5fd9-4966-4a79-8d43-dd87cf706681","Type":"ContainerDied","Data":"71c21c7399b13e29a5d4b8a551057e956678efd13eeb76db51d74f64b3b30ace"} Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.192559 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71c21c7399b13e29a5d4b8a551057e956678efd13eeb76db51d74f64b3b30ace" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.284257 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476101 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-combined-ca-bundle\") pod \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476174 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwvk7\" (UniqueName: \"kubernetes.io/projected/5a6c5fd9-4966-4a79-8d43-dd87cf706681-kube-api-access-rwvk7\") pod \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476199 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-scripts\") pod \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476299 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-log-httpd\") pod \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476377 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-run-httpd\") pod \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476447 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-sg-core-conf-yaml\") pod \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476506 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-config-data\") pod \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\" (UID: \"5a6c5fd9-4966-4a79-8d43-dd87cf706681\") " Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.476799 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5a6c5fd9-4966-4a79-8d43-dd87cf706681" (UID: "5a6c5fd9-4966-4a79-8d43-dd87cf706681"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.477060 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5a6c5fd9-4966-4a79-8d43-dd87cf706681" (UID: "5a6c5fd9-4966-4a79-8d43-dd87cf706681"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.477151 4797 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.477172 4797 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a6c5fd9-4966-4a79-8d43-dd87cf706681-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.480463 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-scripts" (OuterVolumeSpecName: "scripts") pod "5a6c5fd9-4966-4a79-8d43-dd87cf706681" (UID: "5a6c5fd9-4966-4a79-8d43-dd87cf706681"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.486054 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a6c5fd9-4966-4a79-8d43-dd87cf706681-kube-api-access-rwvk7" (OuterVolumeSpecName: "kube-api-access-rwvk7") pod "5a6c5fd9-4966-4a79-8d43-dd87cf706681" (UID: "5a6c5fd9-4966-4a79-8d43-dd87cf706681"). InnerVolumeSpecName "kube-api-access-rwvk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.514801 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5a6c5fd9-4966-4a79-8d43-dd87cf706681" (UID: "5a6c5fd9-4966-4a79-8d43-dd87cf706681"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.515448 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.516698 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-log" containerID="cri-o://51db9dc1b0d260bf1e92f8a916482c8455108baa9c2c5abd2ff62f98e5830b87" gracePeriod=30 Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.516777 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-httpd" containerID="cri-o://2eb882a209ba222ac4a28467a60e6142ecb4307f204e350317ebd67b229d7496" gracePeriod=30 Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.579922 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwvk7\" (UniqueName: \"kubernetes.io/projected/5a6c5fd9-4966-4a79-8d43-dd87cf706681-kube-api-access-rwvk7\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.579975 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.579985 4797 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.619674 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a6c5fd9-4966-4a79-8d43-dd87cf706681" (UID: "5a6c5fd9-4966-4a79-8d43-dd87cf706681"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.629062 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-config-data" (OuterVolumeSpecName: "config-data") pod "5a6c5fd9-4966-4a79-8d43-dd87cf706681" (UID: "5a6c5fd9-4966-4a79-8d43-dd87cf706681"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.681437 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:53 crc kubenswrapper[4797]: I0216 11:26:53.681473 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6c5fd9-4966-4a79-8d43-dd87cf706681-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.202934 4797 generic.go:334] "Generic (PLEG): container finished" podID="e45357a0-d18d-4114-8598-5fa948443f32" containerID="51db9dc1b0d260bf1e92f8a916482c8455108baa9c2c5abd2ff62f98e5830b87" exitCode=143 Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.203024 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45357a0-d18d-4114-8598-5fa948443f32","Type":"ContainerDied","Data":"51db9dc1b0d260bf1e92f8a916482c8455108baa9c2c5abd2ff62f98e5830b87"} Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.206287 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.207541 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"33c9ce82-2d91-49fd-935c-18996e6ecc18","Type":"ContainerStarted","Data":"26351e9588a5e6042da6cb76281a3df1ff39cca601f7accc505c11062743a994"} Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.228922 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.212475498 podStartE2EDuration="13.228886708s" podCreationTimestamp="2026-02-16 11:26:41 +0000 UTC" firstStartedPulling="2026-02-16 11:26:42.274083943 +0000 UTC m=+1196.994268923" lastFinishedPulling="2026-02-16 11:26:53.290495153 +0000 UTC m=+1208.010680133" observedRunningTime="2026-02-16 11:26:54.226412101 +0000 UTC m=+1208.946597091" watchObservedRunningTime="2026-02-16 11:26:54.228886708 +0000 UTC m=+1208.949071688" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.259392 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.271287 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.281908 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:26:54 crc kubenswrapper[4797]: E0216 11:26:54.282315 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="sg-core" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282331 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="sg-core" Feb 16 11:26:54 crc kubenswrapper[4797]: E0216 11:26:54.282361 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-central-agent" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282368 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-central-agent" Feb 16 11:26:54 crc kubenswrapper[4797]: E0216 11:26:54.282378 4797 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-notification-agent" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282386 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-notification-agent" Feb 16 11:26:54 crc kubenswrapper[4797]: E0216 11:26:54.282394 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="proxy-httpd" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282399 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="proxy-httpd" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282569 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-notification-agent" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282603 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="sg-core" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282613 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="proxy-httpd" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.282638 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" containerName="ceilometer-central-agent" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.284745 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.288750 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.289119 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.298109 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.391957 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-log-httpd\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.392060 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-scripts\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.392122 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.392158 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.392208 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-run-httpd\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.392261 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmq4w\" (UniqueName: \"kubernetes.io/projected/08a6aa9f-aab5-4565-8db9-aacd187eee44-kube-api-access-dmq4w\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.392341 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-config-data\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.493719 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-config-data\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.493798 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-log-httpd\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.493878 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-scripts\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.494322 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.494365 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.494648 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-log-httpd\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.494679 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-run-httpd\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.494848 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmq4w\" (UniqueName: \"kubernetes.io/projected/08a6aa9f-aab5-4565-8db9-aacd187eee44-kube-api-access-dmq4w\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.494982 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-run-httpd\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.498736 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-scripts\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.500703 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-config-data\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.501096 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.501474 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.517359 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmq4w\" (UniqueName: \"kubernetes.io/projected/08a6aa9f-aab5-4565-8db9-aacd187eee44-kube-api-access-dmq4w\") pod \"ceilometer-0\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.613103 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:26:54 crc kubenswrapper[4797]: I0216 11:26:54.617935 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.079737 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:26:55 crc kubenswrapper[4797]: W0216 11:26:55.081853 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a6aa9f_aab5_4565_8db9_aacd187eee44.slice/crio-1b752c996aec9da8dad42e9cfc944c47fbab47b0a130292302cbbfd06d09cd32 WatchSource:0}: Error finding container 1b752c996aec9da8dad42e9cfc944c47fbab47b0a130292302cbbfd06d09cd32: Status 404 returned error can't find the container with id 1b752c996aec9da8dad42e9cfc944c47fbab47b0a130292302cbbfd06d09cd32 Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.086664 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.214376 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.217001 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerStarted","Data":"1b752c996aec9da8dad42e9cfc944c47fbab47b0a130292302cbbfd06d09cd32"} Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.236676 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64d4c9c779-ctrqz" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.719111 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-gbj9w"] Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.725366 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.738385 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gbj9w"] Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.867967 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2855-account-create-update-r99n9"] Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.869309 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.874179 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.887008 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2855-account-create-update-r99n9"] Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.928682 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2e02e2-0ab2-4f24-8bae-e8613132a219-operator-scripts\") pod \"nova-api-db-create-gbj9w\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.928721 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxrqj\" (UniqueName: \"kubernetes.io/projected/fa2e02e2-0ab2-4f24-8bae-e8613132a219-kube-api-access-vxrqj\") pod \"nova-api-db-create-gbj9w\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.951768 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-f7hz9"] Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.953053 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:55 crc kubenswrapper[4797]: I0216 11:26:55.972046 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f7hz9"] Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.029664 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a6c5fd9-4966-4a79-8d43-dd87cf706681" path="/var/lib/kubelet/pods/5a6c5fd9-4966-4a79-8d43-dd87cf706681/volumes" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.031999 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2e02e2-0ab2-4f24-8bae-e8613132a219-operator-scripts\") pod \"nova-api-db-create-gbj9w\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.032033 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881f97b5-ecc4-4032-97f0-5cd87b06d39e-operator-scripts\") pod \"nova-api-2855-account-create-update-r99n9\" (UID: \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.032055 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxrqj\" (UniqueName: \"kubernetes.io/projected/fa2e02e2-0ab2-4f24-8bae-e8613132a219-kube-api-access-vxrqj\") pod \"nova-api-db-create-gbj9w\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.032138 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pwdc\" (UniqueName: \"kubernetes.io/projected/881f97b5-ecc4-4032-97f0-5cd87b06d39e-kube-api-access-5pwdc\") pod \"nova-api-2855-account-create-update-r99n9\" 
(UID: \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.034128 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2e02e2-0ab2-4f24-8bae-e8613132a219-operator-scripts\") pod \"nova-api-db-create-gbj9w\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.054510 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-7lsx9"] Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.055864 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.088731 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxrqj\" (UniqueName: \"kubernetes.io/projected/fa2e02e2-0ab2-4f24-8bae-e8613132a219-kube-api-access-vxrqj\") pod \"nova-api-db-create-gbj9w\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.092960 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7lsx9"] Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.096268 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.140136 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pwdc\" (UniqueName: \"kubernetes.io/projected/881f97b5-ecc4-4032-97f0-5cd87b06d39e-kube-api-access-5pwdc\") pod \"nova-api-2855-account-create-update-r99n9\" (UID: \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.140212 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f74dd-7004-4231-aa8a-381eda1790b5-operator-scripts\") pod \"nova-cell1-db-create-7lsx9\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.140258 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-operator-scripts\") pod \"nova-cell0-db-create-f7hz9\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.140314 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlkg\" (UniqueName: \"kubernetes.io/projected/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-kube-api-access-nqlkg\") pod \"nova-cell0-db-create-f7hz9\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.140357 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881f97b5-ecc4-4032-97f0-5cd87b06d39e-operator-scripts\") pod \"nova-api-2855-account-create-update-r99n9\" (UID: 
\"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.140422 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48kr\" (UniqueName: \"kubernetes.io/projected/343f74dd-7004-4231-aa8a-381eda1790b5-kube-api-access-k48kr\") pod \"nova-cell1-db-create-7lsx9\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.144981 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881f97b5-ecc4-4032-97f0-5cd87b06d39e-operator-scripts\") pod \"nova-api-2855-account-create-update-r99n9\" (UID: \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.203699 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pwdc\" (UniqueName: \"kubernetes.io/projected/881f97b5-ecc4-4032-97f0-5cd87b06d39e-kube-api-access-5pwdc\") pod \"nova-api-2855-account-create-update-r99n9\" (UID: \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.210779 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4e32-account-create-update-zj54l"] Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.212146 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.216624 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.222181 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.236310 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e32-account-create-update-zj54l"] Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.242705 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlkg\" (UniqueName: \"kubernetes.io/projected/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-kube-api-access-nqlkg\") pod \"nova-cell0-db-create-f7hz9\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.242785 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2w6g\" (UniqueName: \"kubernetes.io/projected/db36174d-c724-45b7-a3a0-528fb5539864-kube-api-access-s2w6g\") pod \"nova-cell0-4e32-account-create-update-zj54l\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.242944 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k48kr\" (UniqueName: \"kubernetes.io/projected/343f74dd-7004-4231-aa8a-381eda1790b5-kube-api-access-k48kr\") pod \"nova-cell1-db-create-7lsx9\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.243026 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db36174d-c724-45b7-a3a0-528fb5539864-operator-scripts\") pod \"nova-cell0-4e32-account-create-update-zj54l\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.243163 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f74dd-7004-4231-aa8a-381eda1790b5-operator-scripts\") pod \"nova-cell1-db-create-7lsx9\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.243236 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-operator-scripts\") pod \"nova-cell0-db-create-f7hz9\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.244167 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f74dd-7004-4231-aa8a-381eda1790b5-operator-scripts\") pod \"nova-cell1-db-create-7lsx9\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.244174 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-operator-scripts\") pod \"nova-cell0-db-create-f7hz9\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 
11:26:56.266560 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlkg\" (UniqueName: \"kubernetes.io/projected/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-kube-api-access-nqlkg\") pod \"nova-cell0-db-create-f7hz9\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.267471 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k48kr\" (UniqueName: \"kubernetes.io/projected/343f74dd-7004-4231-aa8a-381eda1790b5-kube-api-access-k48kr\") pod \"nova-cell1-db-create-7lsx9\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.268179 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerStarted","Data":"3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f"} Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.294014 4797 generic.go:334] "Generic (PLEG): container finished" podID="7692da44-8fff-4c27-8069-4278620f1d55" containerID="068392753fe07fe9efc2d75cfdef54388401fa6ff9d164599a854dbaeb33fb60" exitCode=0 Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.294804 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7692da44-8fff-4c27-8069-4278620f1d55","Type":"ContainerDied","Data":"068392753fe07fe9efc2d75cfdef54388401fa6ff9d164599a854dbaeb33fb60"} Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.294908 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7692da44-8fff-4c27-8069-4278620f1d55","Type":"ContainerDied","Data":"c3f768c943addb60d05706396f621639bc97cf8f865ea12f6dcce1716a67e8af"} Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.294935 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f768c943addb60d05706396f621639bc97cf8f865ea12f6dcce1716a67e8af" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.306011 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.345462 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2w6g\" (UniqueName: \"kubernetes.io/projected/db36174d-c724-45b7-a3a0-528fb5539864-kube-api-access-s2w6g\") pod \"nova-cell0-4e32-account-create-update-zj54l\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.345611 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db36174d-c724-45b7-a3a0-528fb5539864-operator-scripts\") pod \"nova-cell0-4e32-account-create-update-zj54l\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.348044 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db36174d-c724-45b7-a3a0-528fb5539864-operator-scripts\") pod \"nova-cell0-4e32-account-create-update-zj54l\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.360703 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.368001 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2w6g\" (UniqueName: \"kubernetes.io/projected/db36174d-c724-45b7-a3a0-528fb5539864-kube-api-access-s2w6g\") pod \"nova-cell0-4e32-account-create-update-zj54l\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.381023 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.427393 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.434837 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.442434 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-355b-account-create-update-c2vkr"] Feb 16 11:26:56 crc kubenswrapper[4797]: E0216 11:26:56.447218 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-log" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.447233 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-public-tls-certs\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.447313 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-config-data\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.447340 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-combined-ca-bundle\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.450973 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-httpd-run\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.451050 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-scripts\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.451122 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89vr\" (UniqueName: \"kubernetes.io/projected/7692da44-8fff-4c27-8069-4278620f1d55-kube-api-access-h89vr\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.451327 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.451413 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-logs\") pod \"7692da44-8fff-4c27-8069-4278620f1d55\" (UID: \"7692da44-8fff-4c27-8069-4278620f1d55\") " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.453587 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-logs" (OuterVolumeSpecName: "logs") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: 
"7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.447246 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-log" Feb 16 11:26:56 crc kubenswrapper[4797]: E0216 11:26:56.454053 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-httpd" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.454070 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-httpd" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.454434 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-httpd" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.454452 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="7692da44-8fff-4c27-8069-4278620f1d55" containerName="glance-log" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.455184 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-355b-account-create-update-c2vkr" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.461027 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.467306 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: "7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.467467 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7692da44-8fff-4c27-8069-4278620f1d55-kube-api-access-h89vr" (OuterVolumeSpecName: "kube-api-access-h89vr") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: "7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "kube-api-access-h89vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.529094 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: "7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.584642 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.585155 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.585185 4797 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7692da44-8fff-4c27-8069-4278620f1d55-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.585200 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89vr\" (UniqueName: \"kubernetes.io/projected/7692da44-8fff-4c27-8069-4278620f1d55-kube-api-access-h89vr\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.596470 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-355b-account-create-update-c2vkr"] Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.628473 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: "7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.629205 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4" (OuterVolumeSpecName: "glance") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: "7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.633496 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-scripts" (OuterVolumeSpecName: "scripts") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: "7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.683871 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-config-data" (OuterVolumeSpecName: "config-data") pod "7692da44-8fff-4c27-8069-4278620f1d55" (UID: "7692da44-8fff-4c27-8069-4278620f1d55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.694960 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wmhp\" (UniqueName: \"kubernetes.io/projected/557afa65-539f-4a13-9817-34714c8dd21d-kube-api-access-5wmhp\") pod \"nova-cell1-355b-account-create-update-c2vkr\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " pod="openstack/nova-cell1-355b-account-create-update-c2vkr" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.695122 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557afa65-539f-4a13-9817-34714c8dd21d-operator-scripts\") pod \"nova-cell1-355b-account-create-update-c2vkr\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " pod="openstack/nova-cell1-355b-account-create-update-c2vkr" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.695824 4797 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") on node \"crc\" " Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.695847 4797 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.695861 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.695871 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7692da44-8fff-4c27-8069-4278620f1d55-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.753847 4797 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.754144 4797 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4") on node "crc"
Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.804918 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wmhp\" (UniqueName: \"kubernetes.io/projected/557afa65-539f-4a13-9817-34714c8dd21d-kube-api-access-5wmhp\") pod \"nova-cell1-355b-account-create-update-c2vkr\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " pod="openstack/nova-cell1-355b-account-create-update-c2vkr"
Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.805040 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557afa65-539f-4a13-9817-34714c8dd21d-operator-scripts\") pod \"nova-cell1-355b-account-create-update-c2vkr\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " pod="openstack/nova-cell1-355b-account-create-update-c2vkr"
Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.805196 4797 reconciler_common.go:293] "Volume detached for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") on node \"crc\" DevicePath \"\""
Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.805934 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557afa65-539f-4a13-9817-34714c8dd21d-operator-scripts\") pod \"nova-cell1-355b-account-create-update-c2vkr\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " pod="openstack/nova-cell1-355b-account-create-update-c2vkr"
Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.835153 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wmhp\" (UniqueName: \"kubernetes.io/projected/557afa65-539f-4a13-9817-34714c8dd21d-kube-api-access-5wmhp\") pod \"nova-cell1-355b-account-create-update-c2vkr\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " pod="openstack/nova-cell1-355b-account-create-update-c2vkr"
Feb 16 11:26:56 crc kubenswrapper[4797]: I0216 11:26:56.921229 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2855-account-create-update-r99n9"]
Feb 16 11:26:57 crc kubenswrapper[4797]: E0216 11:26:57.106486 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 11:26:57 crc kubenswrapper[4797]: E0216 11:26:57.107016 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 11:26:57 crc kubenswrapper[4797]: E0216 11:26:57.107182 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.110464 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-355b-account-create-update-c2vkr"
Feb 16 11:26:57 crc kubenswrapper[4797]: E0216 11:26:57.109060 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.325529 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerStarted","Data":"1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95"}
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.334931 4797 generic.go:334] "Generic (PLEG): container finished" podID="e45357a0-d18d-4114-8598-5fa948443f32" containerID="2eb882a209ba222ac4a28467a60e6142ecb4307f204e350317ebd67b229d7496" exitCode=0
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.335000 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45357a0-d18d-4114-8598-5fa948443f32","Type":"ContainerDied","Data":"2eb882a209ba222ac4a28467a60e6142ecb4307f204e350317ebd67b229d7496"}
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.340108 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.341107 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2855-account-create-update-r99n9" event={"ID":"881f97b5-ecc4-4032-97f0-5cd87b06d39e","Type":"ContainerStarted","Data":"1b7ee416101efa00dda5d3b6c8121977fb41d0e8b2fef83cceca88c40fd0436e"}
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.341134 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2855-account-create-update-r99n9" event={"ID":"881f97b5-ecc4-4032-97f0-5cd87b06d39e","Type":"ContainerStarted","Data":"6e12946040f0cf58adf63bc2ea89a27857c50185a44e3c51a47bc62b1516e23a"}
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.381603 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2855-account-create-update-r99n9" podStartSLOduration=2.380777679 podStartE2EDuration="2.380777679s" podCreationTimestamp="2026-02-16 11:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:57.358386486 +0000 UTC m=+1212.078571466" watchObservedRunningTime="2026-02-16 11:26:57.380777679 +0000 UTC m=+1212.100962679"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.476639 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gbj9w"]
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.496554 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.529025 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-f7hz9"]
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.538477 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.545279 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.547211 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.551847 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.551874 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.580150 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.601855 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4e32-account-create-update-zj54l"]
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.754830 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.755443 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.755497 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.755519 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e651df34-b345-442a-aa48-2f3a52a8df2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.755594 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kp49\" (UniqueName: \"kubernetes.io/projected/e651df34-b345-442a-aa48-2f3a52a8df2b-kube-api-access-2kp49\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0"
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.755717 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-config-data\") pod \"glance-default-external-api-0\" (UID:
\"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.755792 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e651df34-b345-442a-aa48-2f3a52a8df2b-logs\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.755823 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.831016 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.866827 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e651df34-b345-442a-aa48-2f3a52a8df2b-logs\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.866875 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.866928 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.866955 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.866989 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e651df34-b345-442a-aa48-2f3a52a8df2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.867005 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.867038 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2kp49\" (UniqueName: \"kubernetes.io/projected/e651df34-b345-442a-aa48-2f3a52a8df2b-kube-api-access-2kp49\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.867120 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.868593 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e651df34-b345-442a-aa48-2f3a52a8df2b-logs\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.874348 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e651df34-b345-442a-aa48-2f3a52a8df2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.877213 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.892704 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.892764 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f8da30fb4c83d9deb7f001a58f922a696263527e837af7c4c51b5beb3f892969/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.896733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.928121 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kp49\" (UniqueName: \"kubernetes.io/projected/e651df34-b345-442a-aa48-2f3a52a8df2b-kube-api-access-2kp49\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.930475 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.931404 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e651df34-b345-442a-aa48-2f3a52a8df2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.969330 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ggxx\" (UniqueName: \"kubernetes.io/projected/e45357a0-d18d-4114-8598-5fa948443f32-kube-api-access-5ggxx\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.969406 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-config-data\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.969542 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-httpd-run\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.969620 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-scripts\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc 
kubenswrapper[4797]: I0216 11:26:57.969672 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-internal-tls-certs\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.969728 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-logs\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.969900 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-combined-ca-bundle\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.970046 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"e45357a0-d18d-4114-8598-5fa948443f32\" (UID: \"e45357a0-d18d-4114-8598-5fa948443f32\") " Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.973243 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-logs" (OuterVolumeSpecName: "logs") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.976340 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.976541 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:57 crc kubenswrapper[4797]: I0216 11:26:57.993484 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e45357a0-d18d-4114-8598-5fa948443f32-kube-api-access-5ggxx" (OuterVolumeSpecName: "kube-api-access-5ggxx") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "kube-api-access-5ggxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.000937 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-scripts" (OuterVolumeSpecName: "scripts") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.021393 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f6caeb13-2d9a-4fda-992e-356359ebb2f4\") pod \"glance-default-external-api-0\" (UID: \"e651df34-b345-442a-aa48-2f3a52a8df2b\") " pod="openstack/glance-default-external-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.061598 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.079179 4797 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e45357a0-d18d-4114-8598-5fa948443f32-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.079207 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.079221 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.079234 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ggxx\" (UniqueName: \"kubernetes.io/projected/e45357a0-d18d-4114-8598-5fa948443f32-kube-api-access-5ggxx\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.100677 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7692da44-8fff-4c27-8069-4278620f1d55" path="/var/lib/kubelet/pods/7692da44-8fff-4c27-8069-4278620f1d55/volumes" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.123100 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-7lsx9"] Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.129872 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82" (OuterVolumeSpecName: "glance") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.146036 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-config-data" (OuterVolumeSpecName: "config-data") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.177708 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e45357a0-d18d-4114-8598-5fa948443f32" (UID: "e45357a0-d18d-4114-8598-5fa948443f32"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.181925 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.181976 4797 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e45357a0-d18d-4114-8598-5fa948443f32-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.182029 4797 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") on node \"crc\" " Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.185523 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.327520 4797 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.327860 4797 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82") on node "crc" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.347224 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-355b-account-create-update-c2vkr"] Feb 16 11:26:58 crc kubenswrapper[4797]: W0216 11:26:58.362855 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod557afa65_539f_4a13_9817_34714c8dd21d.slice/crio-7fcf6ed1ed9ce21a33122936abd722e1400f0591cbefdfd0b0e594b2fab09eef WatchSource:0}: Error finding container 7fcf6ed1ed9ce21a33122936abd722e1400f0591cbefdfd0b0e594b2fab09eef: Status 404 returned error can't find the container with id 7fcf6ed1ed9ce21a33122936abd722e1400f0591cbefdfd0b0e594b2fab09eef Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.370343 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e45357a0-d18d-4114-8598-5fa948443f32","Type":"ContainerDied","Data":"5b60fff64e28041ae5aa0bd5d3323dda29282390c638addd99baa6992dea0137"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.370795 4797 scope.go:117] "RemoveContainer" containerID="2eb882a209ba222ac4a28467a60e6142ecb4307f204e350317ebd67b229d7496" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.370612 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.396771 4797 generic.go:334] "Generic (PLEG): container finished" podID="881f97b5-ecc4-4032-97f0-5cd87b06d39e" containerID="1b7ee416101efa00dda5d3b6c8121977fb41d0e8b2fef83cceca88c40fd0436e" exitCode=0 Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.396915 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2855-account-create-update-r99n9" event={"ID":"881f97b5-ecc4-4032-97f0-5cd87b06d39e","Type":"ContainerDied","Data":"1b7ee416101efa00dda5d3b6c8121977fb41d0e8b2fef83cceca88c40fd0436e"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.401134 4797 reconciler_common.go:293] "Volume detached for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") on node \"crc\" DevicePath \"\"" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.438335 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7lsx9" event={"ID":"343f74dd-7004-4231-aa8a-381eda1790b5","Type":"ContainerStarted","Data":"dfd6fe43a7ec9c6983e50c0aba56de21eb1fab59f6fba4a85330655454bb64e0"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.487988 4797 generic.go:334] "Generic (PLEG): container finished" podID="fa2e02e2-0ab2-4f24-8bae-e8613132a219" containerID="4bec00d59423bae312555e2b3bf85f34ca49fe55c6012ab5e98e18774b8c943a" exitCode=0 Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.488088 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gbj9w" event={"ID":"fa2e02e2-0ab2-4f24-8bae-e8613132a219","Type":"ContainerDied","Data":"4bec00d59423bae312555e2b3bf85f34ca49fe55c6012ab5e98e18774b8c943a"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.488122 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gbj9w" event={"ID":"fa2e02e2-0ab2-4f24-8bae-e8613132a219","Type":"ContainerStarted","Data":"a481b813ecbe6002dbffe39a896d9902a413f93631f26c5b70446ed3b1239979"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.506210 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f7hz9" event={"ID":"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb","Type":"ContainerStarted","Data":"d99a13361d6e5072a412f8e5c8cb8447c0efc7e4203b9ff7f1f6bf43071075c4"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.525197 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.536846 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" event={"ID":"db36174d-c724-45b7-a3a0-528fb5539864","Type":"ContainerStarted","Data":"d7174d30e707071360b3ecfd552669901fdbdecfc6491397a01b003dbce53888"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.536924 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" event={"ID":"db36174d-c724-45b7-a3a0-528fb5539864","Type":"ContainerStarted","Data":"d58c03f8ca614eeb9e891883cfb67b462aea83c07a90baadfc3bce3f78602dfd"} Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.542566 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.549972 4797 scope.go:117] "RemoveContainer" 
containerID="51db9dc1b0d260bf1e92f8a916482c8455108baa9c2c5abd2ff62f98e5830b87" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.570950 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:58 crc kubenswrapper[4797]: E0216 11:26:58.571654 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-httpd" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.571677 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-httpd" Feb 16 11:26:58 crc kubenswrapper[4797]: E0216 11:26:58.571704 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-log" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.571716 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-log" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.571982 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-httpd" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.572002 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e45357a0-d18d-4114-8598-5fa948443f32" containerName="glance-log" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.577397 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.584033 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.584177 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.591355 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.594898 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-f7hz9" podStartSLOduration=3.594868012 podStartE2EDuration="3.594868012s" podCreationTimestamp="2026-02-16 11:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:58.550537859 +0000 UTC m=+1213.270722839" watchObservedRunningTime="2026-02-16 11:26:58.594868012 +0000 UTC m=+1213.315052992" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.665175 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" podStartSLOduration=2.665154463 podStartE2EDuration="2.665154463s" podCreationTimestamp="2026-02-16 11:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:26:58.583610264 +0000 UTC m=+1213.303795244" watchObservedRunningTime="2026-02-16 11:26:58.665154463 +0000 UTC m=+1213.385339443" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.732672 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-config-data\") pod 
\"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.732813 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d308d940-ff49-426f-abfb-50203189d565-logs\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.732847 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.732871 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrgfp\" (UniqueName: \"kubernetes.io/projected/d308d940-ff49-426f-abfb-50203189d565-kube-api-access-vrgfp\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.732891 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.732932 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.733033 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d308d940-ff49-426f-abfb-50203189d565-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.733095 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.835050 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.836984 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d308d940-ff49-426f-abfb-50203189d565-logs\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.837066 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.837099 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrgfp\" (UniqueName: \"kubernetes.io/projected/d308d940-ff49-426f-abfb-50203189d565-kube-api-access-vrgfp\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.837127 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.837180 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.837365 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d308d940-ff49-426f-abfb-50203189d565-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.837453 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.838192 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d308d940-ff49-426f-abfb-50203189d565-logs\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.839099 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d308d940-ff49-426f-abfb-50203189d565-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.843704 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.843752 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.844729 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.849100 4797 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.849164 4797 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/27e379195fe32f84d1c9f17b5c57278f71c5a261b9f037c02b6f4c2041aa5cbc/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.859206 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d308d940-ff49-426f-abfb-50203189d565-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.862459 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrgfp\" (UniqueName: \"kubernetes.io/projected/d308d940-ff49-426f-abfb-50203189d565-kube-api-access-vrgfp\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:58 crc kubenswrapper[4797]: I0216 11:26:58.931611 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c76c5bc-7dce-4372-9f4e-3e3db9b3ce82\") pod \"glance-default-internal-api-0\" (UID: \"d308d940-ff49-426f-abfb-50203189d565\") " pod="openstack/glance-default-internal-api-0" Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.014180 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 11:26:59 crc kubenswrapper[4797]: W0216 11:26:59.038896 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode651df34_b345_442a_aa48_2f3a52a8df2b.slice/crio-01e4d45e59764384ead603f223746af7ec171a2dbdf5120b72985fbc78b4d70a WatchSource:0}: Error finding container 
01e4d45e59764384ead603f223746af7ec171a2dbdf5120b72985fbc78b4d70a: Status 404 returned error can't find the container with id 01e4d45e59764384ead603f223746af7ec171a2dbdf5120b72985fbc78b4d70a Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.041972 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.573936 4797 generic.go:334] "Generic (PLEG): container finished" podID="63d74122-c09d-42b3-9cd9-cf4a4a0b16cb" containerID="d3088d8ca7f9e63d10733404e9a3bb8a8fc52a61ddb265b139346ddff28504f6" exitCode=0 Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.574105 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f7hz9" event={"ID":"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb","Type":"ContainerDied","Data":"d3088d8ca7f9e63d10733404e9a3bb8a8fc52a61ddb265b139346ddff28504f6"} Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.582145 4797 generic.go:334] "Generic (PLEG): container finished" podID="343f74dd-7004-4231-aa8a-381eda1790b5" containerID="f9c72f7c72989ba1042bc78ffd959886c6a5b24ba4907bdaef12fd75c7cf2d15" exitCode=0 Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.582234 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7lsx9" event={"ID":"343f74dd-7004-4231-aa8a-381eda1790b5","Type":"ContainerDied","Data":"f9c72f7c72989ba1042bc78ffd959886c6a5b24ba4907bdaef12fd75c7cf2d15"} Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.586174 4797 generic.go:334] "Generic (PLEG): container finished" podID="557afa65-539f-4a13-9817-34714c8dd21d" containerID="6c48e9dc3fac4bb4dd211d6331b5b437ef64a6f2faf0d1bc7878354a62b8cad5" exitCode=0 Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.586245 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-355b-account-create-update-c2vkr" event={"ID":"557afa65-539f-4a13-9817-34714c8dd21d","Type":"ContainerDied","Data":"6c48e9dc3fac4bb4dd211d6331b5b437ef64a6f2faf0d1bc7878354a62b8cad5"} Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.586272 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-355b-account-create-update-c2vkr" event={"ID":"557afa65-539f-4a13-9817-34714c8dd21d","Type":"ContainerStarted","Data":"7fcf6ed1ed9ce21a33122936abd722e1400f0591cbefdfd0b0e594b2fab09eef"} Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.593546 4797 generic.go:334] "Generic (PLEG): container finished" podID="db36174d-c724-45b7-a3a0-528fb5539864" containerID="d7174d30e707071360b3ecfd552669901fdbdecfc6491397a01b003dbce53888" exitCode=0 Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.593631 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" event={"ID":"db36174d-c724-45b7-a3a0-528fb5539864","Type":"ContainerDied","Data":"d7174d30e707071360b3ecfd552669901fdbdecfc6491397a01b003dbce53888"} Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.602070 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e651df34-b345-442a-aa48-2f3a52a8df2b","Type":"ContainerStarted","Data":"01e4d45e59764384ead603f223746af7ec171a2dbdf5120b72985fbc78b4d70a"} Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.626381 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerStarted","Data":"4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7"} Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.661780 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6fd696486f-x6hfl" Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.682739 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.762294 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dc766fb7b-kdzz7"] Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.762549 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dc766fb7b-kdzz7" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-api" containerID="cri-o://c132e0058d25ce2d261cf11bfeef444d21c0c5ecffbbc497a6d132e050a04673" gracePeriod=30 Feb 16 11:26:59 crc kubenswrapper[4797]: I0216 11:26:59.763039 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7dc766fb7b-kdzz7" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-httpd" containerID="cri-o://ca36d862979bc5ce0af35447b3630046aac7e584dfc56a5a3af7c9b2cea22fcc" gracePeriod=30 Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.010105 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e45357a0-d18d-4114-8598-5fa948443f32" path="/var/lib/kubelet/pods/e45357a0-d18d-4114-8598-5fa948443f32/volumes" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.444436 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.463296 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.589641 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881f97b5-ecc4-4032-97f0-5cd87b06d39e-operator-scripts\") pod \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\" (UID: \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.590129 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pwdc\" (UniqueName: \"kubernetes.io/projected/881f97b5-ecc4-4032-97f0-5cd87b06d39e-kube-api-access-5pwdc\") pod \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\" (UID: \"881f97b5-ecc4-4032-97f0-5cd87b06d39e\") " Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.590443 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2e02e2-0ab2-4f24-8bae-e8613132a219-operator-scripts\") pod \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.590638 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxrqj\" (UniqueName: \"kubernetes.io/projected/fa2e02e2-0ab2-4f24-8bae-e8613132a219-kube-api-access-vxrqj\") pod \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\" (UID: \"fa2e02e2-0ab2-4f24-8bae-e8613132a219\") " Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.591399 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa2e02e2-0ab2-4f24-8bae-e8613132a219-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa2e02e2-0ab2-4f24-8bae-e8613132a219" (UID: "fa2e02e2-0ab2-4f24-8bae-e8613132a219"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.591675 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881f97b5-ecc4-4032-97f0-5cd87b06d39e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "881f97b5-ecc4-4032-97f0-5cd87b06d39e" (UID: "881f97b5-ecc4-4032-97f0-5cd87b06d39e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.592134 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/881f97b5-ecc4-4032-97f0-5cd87b06d39e-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.592216 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa2e02e2-0ab2-4f24-8bae-e8613132a219-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.596176 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2e02e2-0ab2-4f24-8bae-e8613132a219-kube-api-access-vxrqj" (OuterVolumeSpecName: "kube-api-access-vxrqj") pod "fa2e02e2-0ab2-4f24-8bae-e8613132a219" (UID: "fa2e02e2-0ab2-4f24-8bae-e8613132a219"). InnerVolumeSpecName "kube-api-access-vxrqj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.597260 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881f97b5-ecc4-4032-97f0-5cd87b06d39e-kube-api-access-5pwdc" (OuterVolumeSpecName: "kube-api-access-5pwdc") pod "881f97b5-ecc4-4032-97f0-5cd87b06d39e" (UID: "881f97b5-ecc4-4032-97f0-5cd87b06d39e"). InnerVolumeSpecName "kube-api-access-5pwdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.654547 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2855-account-create-update-r99n9" event={"ID":"881f97b5-ecc4-4032-97f0-5cd87b06d39e","Type":"ContainerDied","Data":"6e12946040f0cf58adf63bc2ea89a27857c50185a44e3c51a47bc62b1516e23a"} Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.654622 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e12946040f0cf58adf63bc2ea89a27857c50185a44e3c51a47bc62b1516e23a" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.654686 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2855-account-create-update-r99n9" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.659465 4797 generic.go:334] "Generic (PLEG): container finished" podID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerID="ca36d862979bc5ce0af35447b3630046aac7e584dfc56a5a3af7c9b2cea22fcc" exitCode=0 Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.659552 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc766fb7b-kdzz7" event={"ID":"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c","Type":"ContainerDied","Data":"ca36d862979bc5ce0af35447b3630046aac7e584dfc56a5a3af7c9b2cea22fcc"} Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.667387 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerStarted","Data":"207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d"} Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.668119 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-central-agent" containerID="cri-o://3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f" gracePeriod=30 Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.668190 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="sg-core" containerID="cri-o://4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7" gracePeriod=30 Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.668214 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-notification-agent" containerID="cri-o://1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95" gracePeriod=30 Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.668216 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="proxy-httpd" containerID="cri-o://207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d" gracePeriod=30 Feb 16 11:27:00 crc 
kubenswrapper[4797]: I0216 11:27:00.693163 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d308d940-ff49-426f-abfb-50203189d565","Type":"ContainerStarted","Data":"c8a742db781822bee569caaed44853ed03e9a5ed3682188bb88b165f0edbba11"} Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.693201 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d308d940-ff49-426f-abfb-50203189d565","Type":"ContainerStarted","Data":"d1c99375e2fef93e7372168bfd18636c2241e8801bb3bab0ce1b050cead0ec88"} Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.694349 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pwdc\" (UniqueName: \"kubernetes.io/projected/881f97b5-ecc4-4032-97f0-5cd87b06d39e-kube-api-access-5pwdc\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.694378 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxrqj\" (UniqueName: \"kubernetes.io/projected/fa2e02e2-0ab2-4f24-8bae-e8613132a219-kube-api-access-vxrqj\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.696893 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gbj9w" event={"ID":"fa2e02e2-0ab2-4f24-8bae-e8613132a219","Type":"ContainerDied","Data":"a481b813ecbe6002dbffe39a896d9902a413f93631f26c5b70446ed3b1239979"} Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.696925 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a481b813ecbe6002dbffe39a896d9902a413f93631f26c5b70446ed3b1239979" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.696982 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gbj9w" Feb 16 11:27:00 crc kubenswrapper[4797]: I0216 11:27:00.703938 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e651df34-b345-442a-aa48-2f3a52a8df2b","Type":"ContainerStarted","Data":"80e797714253de370c91f050a947fd9cd7688e70abdcf105f19cd65cc41c6acf"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.422029 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.466392 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.091192632 podStartE2EDuration="7.466369956s" podCreationTimestamp="2026-02-16 11:26:54 +0000 UTC" firstStartedPulling="2026-02-16 11:26:55.086347621 +0000 UTC m=+1209.806532601" lastFinishedPulling="2026-02-16 11:26:59.461524945 +0000 UTC m=+1214.181709925" observedRunningTime="2026-02-16 11:27:00.699628474 +0000 UTC m=+1215.419813474" watchObservedRunningTime="2026-02-16 11:27:01.466369956 +0000 UTC m=+1216.186554936" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.519518 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqlkg\" (UniqueName: \"kubernetes.io/projected/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-kube-api-access-nqlkg\") pod \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.519734 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-operator-scripts\") pod \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\" (UID: \"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.522877 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63d74122-c09d-42b3-9cd9-cf4a4a0b16cb" (UID: "63d74122-c09d-42b3-9cd9-cf4a4a0b16cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.540306 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-kube-api-access-nqlkg" (OuterVolumeSpecName: "kube-api-access-nqlkg") pod "63d74122-c09d-42b3-9cd9-cf4a4a0b16cb" (UID: "63d74122-c09d-42b3-9cd9-cf4a4a0b16cb"). InnerVolumeSpecName "kube-api-access-nqlkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.622194 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqlkg\" (UniqueName: \"kubernetes.io/projected/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-kube-api-access-nqlkg\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.622234 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.630706 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-355b-account-create-update-c2vkr" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.640253 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.654983 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.716898 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-7lsx9" event={"ID":"343f74dd-7004-4231-aa8a-381eda1790b5","Type":"ContainerDied","Data":"dfd6fe43a7ec9c6983e50c0aba56de21eb1fab59f6fba4a85330655454bb64e0"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.716940 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfd6fe43a7ec9c6983e50c0aba56de21eb1fab59f6fba4a85330655454bb64e0" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.717003 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-7lsx9" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.718543 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-f7hz9" event={"ID":"63d74122-c09d-42b3-9cd9-cf4a4a0b16cb","Type":"ContainerDied","Data":"d99a13361d6e5072a412f8e5c8cb8447c0efc7e4203b9ff7f1f6bf43071075c4"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.718568 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d99a13361d6e5072a412f8e5c8cb8447c0efc7e4203b9ff7f1f6bf43071075c4" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.718627 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-f7hz9" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.721187 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-355b-account-create-update-c2vkr" event={"ID":"557afa65-539f-4a13-9817-34714c8dd21d","Type":"ContainerDied","Data":"7fcf6ed1ed9ce21a33122936abd722e1400f0591cbefdfd0b0e594b2fab09eef"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.721219 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fcf6ed1ed9ce21a33122936abd722e1400f0591cbefdfd0b0e594b2fab09eef" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.721262 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-355b-account-create-update-c2vkr" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.723243 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k48kr\" (UniqueName: \"kubernetes.io/projected/343f74dd-7004-4231-aa8a-381eda1790b5-kube-api-access-k48kr\") pod \"343f74dd-7004-4231-aa8a-381eda1790b5\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.723323 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wmhp\" (UniqueName: \"kubernetes.io/projected/557afa65-539f-4a13-9817-34714c8dd21d-kube-api-access-5wmhp\") pod \"557afa65-539f-4a13-9817-34714c8dd21d\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.723376 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557afa65-539f-4a13-9817-34714c8dd21d-operator-scripts\") pod \"557afa65-539f-4a13-9817-34714c8dd21d\" (UID: \"557afa65-539f-4a13-9817-34714c8dd21d\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.723414 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f74dd-7004-4231-aa8a-381eda1790b5-operator-scripts\") pod \"343f74dd-7004-4231-aa8a-381eda1790b5\" (UID: \"343f74dd-7004-4231-aa8a-381eda1790b5\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.725447 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557afa65-539f-4a13-9817-34714c8dd21d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "557afa65-539f-4a13-9817-34714c8dd21d" (UID: "557afa65-539f-4a13-9817-34714c8dd21d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.726065 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/343f74dd-7004-4231-aa8a-381eda1790b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "343f74dd-7004-4231-aa8a-381eda1790b5" (UID: "343f74dd-7004-4231-aa8a-381eda1790b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.727197 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e651df34-b345-442a-aa48-2f3a52a8df2b","Type":"ContainerStarted","Data":"98ed353f2f05ab6317a51f737363329790520e3a10b9fe93009257fce08a7fd9"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.731220 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557afa65-539f-4a13-9817-34714c8dd21d-kube-api-access-5wmhp" (OuterVolumeSpecName: "kube-api-access-5wmhp") pod "557afa65-539f-4a13-9817-34714c8dd21d" (UID: "557afa65-539f-4a13-9817-34714c8dd21d"). InnerVolumeSpecName "kube-api-access-5wmhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.731590 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/343f74dd-7004-4231-aa8a-381eda1790b5-kube-api-access-k48kr" (OuterVolumeSpecName: "kube-api-access-k48kr") pod "343f74dd-7004-4231-aa8a-381eda1790b5" (UID: "343f74dd-7004-4231-aa8a-381eda1790b5"). InnerVolumeSpecName "kube-api-access-k48kr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.735319 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" event={"ID":"db36174d-c724-45b7-a3a0-528fb5539864","Type":"ContainerDied","Data":"d58c03f8ca614eeb9e891883cfb67b462aea83c07a90baadfc3bce3f78602dfd"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.735361 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d58c03f8ca614eeb9e891883cfb67b462aea83c07a90baadfc3bce3f78602dfd" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.735872 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4e32-account-create-update-zj54l" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.745327 4797 generic.go:334] "Generic (PLEG): container finished" podID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerID="207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d" exitCode=0 Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.745376 4797 generic.go:334] "Generic (PLEG): container finished" podID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerID="4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7" exitCode=2 Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.745386 4797 generic.go:334] "Generic (PLEG): container finished" podID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerID="1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95" exitCode=0 Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.745365 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerDied","Data":"207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.745739 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerDied","Data":"4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.745805 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerDied","Data":"1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.750655 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d308d940-ff49-426f-abfb-50203189d565","Type":"ContainerStarted","Data":"eeaf566b8359d994a8d62fa9928459d08221d755412f698b17c991e9bb2860f3"} Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.759725 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.759701495 podStartE2EDuration="4.759701495s" podCreationTimestamp="2026-02-16 11:26:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:27:01.753554667 +0000 UTC m=+1216.473739657" watchObservedRunningTime="2026-02-16 11:27:01.759701495 +0000 UTC m=+1216.479886495" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.797584 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.79755082 podStartE2EDuration="3.79755082s" podCreationTimestamp="2026-02-16 11:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:27:01.779991381 +0000 UTC m=+1216.500176371" watchObservedRunningTime="2026-02-16 11:27:01.79755082 +0000 UTC m=+1216.517735800" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.825353 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2w6g\" (UniqueName: \"kubernetes.io/projected/db36174d-c724-45b7-a3a0-528fb5539864-kube-api-access-s2w6g\") pod \"db36174d-c724-45b7-a3a0-528fb5539864\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.825721 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db36174d-c724-45b7-a3a0-528fb5539864-operator-scripts\") pod \"db36174d-c724-45b7-a3a0-528fb5539864\" (UID: \"db36174d-c724-45b7-a3a0-528fb5539864\") " Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.826162 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db36174d-c724-45b7-a3a0-528fb5539864-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "db36174d-c724-45b7-a3a0-528fb5539864" (UID: "db36174d-c724-45b7-a3a0-528fb5539864"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.827540 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k48kr\" (UniqueName: \"kubernetes.io/projected/343f74dd-7004-4231-aa8a-381eda1790b5-kube-api-access-k48kr\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.827570 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/db36174d-c724-45b7-a3a0-528fb5539864-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.827605 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wmhp\" (UniqueName: \"kubernetes.io/projected/557afa65-539f-4a13-9817-34714c8dd21d-kube-api-access-5wmhp\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.827619 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557afa65-539f-4a13-9817-34714c8dd21d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.827633 4797 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/343f74dd-7004-4231-aa8a-381eda1790b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.829439 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db36174d-c724-45b7-a3a0-528fb5539864-kube-api-access-s2w6g" (OuterVolumeSpecName: "kube-api-access-s2w6g") pod "db36174d-c724-45b7-a3a0-528fb5539864" (UID: "db36174d-c724-45b7-a3a0-528fb5539864"). InnerVolumeSpecName "kube-api-access-s2w6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:01 crc kubenswrapper[4797]: I0216 11:27:01.929682 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2w6g\" (UniqueName: \"kubernetes.io/projected/db36174d-c724-45b7-a3a0-528fb5539864-kube-api-access-s2w6g\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.383345 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.462289 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-log-httpd\") pod \"08a6aa9f-aab5-4565-8db9-aacd187eee44\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.462684 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-run-httpd\") pod \"08a6aa9f-aab5-4565-8db9-aacd187eee44\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.462778 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-combined-ca-bundle\") pod \"08a6aa9f-aab5-4565-8db9-aacd187eee44\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.462917 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmq4w\" (UniqueName: \"kubernetes.io/projected/08a6aa9f-aab5-4565-8db9-aacd187eee44-kube-api-access-dmq4w\") pod \"08a6aa9f-aab5-4565-8db9-aacd187eee44\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.462977 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-scripts\") pod \"08a6aa9f-aab5-4565-8db9-aacd187eee44\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.463035 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "08a6aa9f-aab5-4565-8db9-aacd187eee44" (UID: "08a6aa9f-aab5-4565-8db9-aacd187eee44"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.463083 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-sg-core-conf-yaml\") pod \"08a6aa9f-aab5-4565-8db9-aacd187eee44\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.463142 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-config-data\") pod \"08a6aa9f-aab5-4565-8db9-aacd187eee44\" (UID: \"08a6aa9f-aab5-4565-8db9-aacd187eee44\") " Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.463185 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "08a6aa9f-aab5-4565-8db9-aacd187eee44" (UID: "08a6aa9f-aab5-4565-8db9-aacd187eee44"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.463915 4797 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.463940 4797 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/08a6aa9f-aab5-4565-8db9-aacd187eee44-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.472817 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-scripts" (OuterVolumeSpecName: "scripts") pod "08a6aa9f-aab5-4565-8db9-aacd187eee44" (UID: "08a6aa9f-aab5-4565-8db9-aacd187eee44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.482978 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a6aa9f-aab5-4565-8db9-aacd187eee44-kube-api-access-dmq4w" (OuterVolumeSpecName: "kube-api-access-dmq4w") pod "08a6aa9f-aab5-4565-8db9-aacd187eee44" (UID: "08a6aa9f-aab5-4565-8db9-aacd187eee44"). InnerVolumeSpecName "kube-api-access-dmq4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.493778 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "08a6aa9f-aab5-4565-8db9-aacd187eee44" (UID: "08a6aa9f-aab5-4565-8db9-aacd187eee44"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.548782 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08a6aa9f-aab5-4565-8db9-aacd187eee44" (UID: "08a6aa9f-aab5-4565-8db9-aacd187eee44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.565859 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmq4w\" (UniqueName: \"kubernetes.io/projected/08a6aa9f-aab5-4565-8db9-aacd187eee44-kube-api-access-dmq4w\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.565898 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.565909 4797 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.565922 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.586827 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-config-data" (OuterVolumeSpecName: "config-data") pod "08a6aa9f-aab5-4565-8db9-aacd187eee44" (UID: "08a6aa9f-aab5-4565-8db9-aacd187eee44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.667319 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a6aa9f-aab5-4565-8db9-aacd187eee44-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.789020 4797 generic.go:334] "Generic (PLEG): container finished" podID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerID="3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f" exitCode=0 Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.789060 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerDied","Data":"3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f"} Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.789084 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"08a6aa9f-aab5-4565-8db9-aacd187eee44","Type":"ContainerDied","Data":"1b752c996aec9da8dad42e9cfc944c47fbab47b0a130292302cbbfd06d09cd32"} Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.789101 4797 scope.go:117] "RemoveContainer" containerID="207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.789225 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.823337 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.825091 4797 scope.go:117] "RemoveContainer" containerID="4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.835349 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.852616 4797 scope.go:117] "RemoveContainer" containerID="1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872116 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.872772 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-notification-agent" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872798 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-notification-agent" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.872817 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="sg-core" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872825 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="sg-core" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.872845 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="881f97b5-ecc4-4032-97f0-5cd87b06d39e" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872853 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="881f97b5-ecc4-4032-97f0-5cd87b06d39e" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.872861 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2e02e2-0ab2-4f24-8bae-e8613132a219" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872869 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2e02e2-0ab2-4f24-8bae-e8613132a219" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.872891 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db36174d-c724-45b7-a3a0-528fb5539864" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872899 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="db36174d-c724-45b7-a3a0-528fb5539864" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.872919 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d74122-c09d-42b3-9cd9-cf4a4a0b16cb" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872926 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d74122-c09d-42b3-9cd9-cf4a4a0b16cb" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.872991 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-central-agent" Feb 
16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.872999 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-central-agent" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.873015 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="343f74dd-7004-4231-aa8a-381eda1790b5" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873022 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="343f74dd-7004-4231-aa8a-381eda1790b5" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.873044 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557afa65-539f-4a13-9817-34714c8dd21d" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873051 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="557afa65-539f-4a13-9817-34714c8dd21d" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.873070 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="proxy-httpd" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873077 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="proxy-httpd" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873297 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="881f97b5-ecc4-4032-97f0-5cd87b06d39e" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873311 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-notification-agent" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873320 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d74122-c09d-42b3-9cd9-cf4a4a0b16cb" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873331 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="ceilometer-central-agent" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873345 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2e02e2-0ab2-4f24-8bae-e8613132a219" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873362 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="343f74dd-7004-4231-aa8a-381eda1790b5" containerName="mariadb-database-create" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873374 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="proxy-httpd" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873386 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="db36174d-c724-45b7-a3a0-528fb5539864" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873397 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" containerName="sg-core" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.873407 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="557afa65-539f-4a13-9817-34714c8dd21d" containerName="mariadb-account-create-update" Feb 16 11:27:03 crc 
kubenswrapper[4797]: I0216 11:27:03.875671 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.875766 4797 scope.go:117] "RemoveContainer" containerID="3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.878998 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.879234 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.881458 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.917782 4797 scope.go:117] "RemoveContainer" containerID="207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.919919 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d\": container with ID starting with 207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d not found: ID does not exist" containerID="207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.919984 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d"} err="failed to get container status \"207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d\": rpc error: code = NotFound desc = could not find container \"207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d\": container with ID starting with 207ea8af427ea993a6d2ee9a891549207340e019a42c7d5ff0e1857c3e33a46d not found: ID does not exist" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.920113 4797 scope.go:117] "RemoveContainer" containerID="4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.920546 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7\": container with ID starting with 4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7 not found: ID does not exist" containerID="4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.920609 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7"} err="failed to get container status \"4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7\": rpc error: code = NotFound desc = could not find container \"4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7\": container with ID starting with 4ad0ae024218aca23e9436c47c6d5cd43c872becf99c20becf7bd06444689ea7 not found: ID does not exist" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.920641 4797 scope.go:117] "RemoveContainer" containerID="1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.920941 4797 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95\": container with ID starting with 1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95 not found: ID does not exist" containerID="1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.920970 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95"} err="failed to get container status \"1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95\": rpc error: code = NotFound desc = could not find container \"1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95\": container with ID starting with 1669a540ac7cb8e01e932ff25520755fd867c89936b797d1fc262e48094d0f95 not found: ID does not exist" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.920986 4797 scope.go:117] "RemoveContainer" containerID="3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f" Feb 16 11:27:03 crc kubenswrapper[4797]: E0216 11:27:03.921194 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f\": container with ID starting with 3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f not found: ID does not exist" containerID="3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.921214 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f"} err="failed to get container status \"3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f\": rpc error: code = NotFound desc = could not find container \"3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f\": container with ID starting with 3bc454ce41440074bdee7313614f0ce6c860bb9325f33ba46142bd838dea3f9f not found: ID does not exist" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.973212 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-scripts\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.973251 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.973385 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-config-data\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.973684 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.973875 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.973942 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:03 crc kubenswrapper[4797]: I0216 11:27:03.974006 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f4dx\" (UniqueName: \"kubernetes.io/projected/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-kube-api-access-9f4dx\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.000739 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a6aa9f-aab5-4565-8db9-aacd187eee44" path="/var/lib/kubelet/pods/08a6aa9f-aab5-4565-8db9-aacd187eee44/volumes" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.075755 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.075829 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.075861 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f4dx\" (UniqueName: \"kubernetes.io/projected/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-kube-api-access-9f4dx\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.075920 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-scripts\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.075946 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.076015 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-config-data\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.076108 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.076623 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-log-httpd\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.077403 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-run-httpd\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.080446 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.081140 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-scripts\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.083545 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.084385 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-config-data\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.101617 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f4dx\" (UniqueName: \"kubernetes.io/projected/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-kube-api-access-9f4dx\") pod \"ceilometer-0\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.202951 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.513113 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.608484 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmtct\" (UniqueName: \"kubernetes.io/projected/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-kube-api-access-rmtct\") pod \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.608876 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data-custom\") pod \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.608940 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-scripts\") pod \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.608965 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-etc-machine-id\") pod \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.609070 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data\") pod \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.609166 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-combined-ca-bundle\") pod \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.609238 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-logs\") pod \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\" (UID: \"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9\") " Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.612143 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-logs" (OuterVolumeSpecName: "logs") pod "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" (UID: "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.620869 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-kube-api-access-rmtct" (OuterVolumeSpecName: "kube-api-access-rmtct") pod "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" (UID: "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9"). InnerVolumeSpecName "kube-api-access-rmtct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.625295 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-scripts" (OuterVolumeSpecName: "scripts") pod "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" (UID: "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.625336 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" (UID: "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.642757 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" (UID: "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.652743 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" (UID: "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.710650 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data" (OuterVolumeSpecName: "config-data") pod "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" (UID: "0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.713720 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.713844 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.713866 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.713891 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmtct\" (UniqueName: \"kubernetes.io/projected/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-kube-api-access-rmtct\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.713904 4797 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.713916 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.713930 4797 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.814684 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.831851 4797 generic.go:334] "Generic (PLEG): container finished" podID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerID="3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c" exitCode=137 Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.831945 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.831946 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9","Type":"ContainerDied","Data":"3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c"} Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.832022 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9","Type":"ContainerDied","Data":"f1a647c5c65bb7b7e88d89f4d1082f72fa483a67e77819dc455c66b24ff4469d"} Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.832041 4797 scope.go:117] "RemoveContainer" containerID="3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.842244 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerStarted","Data":"a5aaab249dbc97c523300d50b98620f8422abf648d3adcd2196debb60b71081a"} Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.854159 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.871148 4797 scope.go:117] "RemoveContainer" containerID="a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.875660 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.891403 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.919657 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.921613 4797 scope.go:117] "RemoveContainer" containerID="3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c" Feb 16 11:27:04 crc kubenswrapper[4797]: E0216 11:27:04.921941 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c\": container with ID starting with 3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c not found: ID does not exist" containerID="3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.921966 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c"} err="failed to get container status \"3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c\": rpc error: code = NotFound desc = could not find container \"3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c\": container with ID starting with 3cab704f4baf21cdcdaaf1f82b43f28a1e827eef320a18861037622baaba056c not found: ID does not exist" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.921984 4797 scope.go:117] "RemoveContainer" containerID="a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9" Feb 16 11:27:04 crc kubenswrapper[4797]: E0216 11:27:04.922138 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9\": container with ID starting with a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9 not found: ID does not exist" containerID="a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.922151 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9"} err="failed to get container status \"a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9\": rpc error: code = NotFound desc = could not find container \"a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9\": container with ID starting with a85e703cec5666cbc49a819826749b74cbd06f0dbee8a1cc394275c7b6ac00d9 not found: ID does not exist" Feb 16 11:27:04 crc kubenswrapper[4797]: E0216 11:27:04.922293 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.922338 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api" Feb 16 11:27:04 crc kubenswrapper[4797]: E0216 11:27:04.922411 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api-log" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.922421 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api-log" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.922747 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api-log" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.922772 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" containerName="cinder-api" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.924505 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.926039 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.929069 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.929240 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 11:27:04 crc kubenswrapper[4797]: I0216 11:27:04.930357 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:27:04 crc kubenswrapper[4797]: E0216 11:27:04.937006 4797 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bdd2f48_b8b2_403c_ba61_f21a8ebca6a9.slice/crio-f1a647c5c65bb7b7e88d89f4d1082f72fa483a67e77819dc455c66b24ff4469d\": RecentStats: unable to find data in memory cache]" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.024440 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.024879 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.024912 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-config-data\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.024983 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-config-data-custom\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.025037 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.025123 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.025150 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-scripts\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.025249 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjs2w\" (UniqueName: \"kubernetes.io/projected/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-kube-api-access-jjs2w\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.025333 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-logs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.127353 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.127411 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-scripts\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.127530 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjs2w\" (UniqueName: \"kubernetes.io/projected/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-kube-api-access-jjs2w\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.128314 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-logs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.129813 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.129974 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.130005 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-config-data\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.130081 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-config-data-custom\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.130136 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.131303 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.131369 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-logs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.131451 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-scripts\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.134097 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.134526 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.135045 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-config-data\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.140514 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.143279 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-config-data-custom\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.143312 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjs2w\" (UniqueName: 
\"kubernetes.io/projected/794ba2c1-f4d2-4580-a072-5e3089d0cd4a-kube-api-access-jjs2w\") pod \"cinder-api-0\" (UID: \"794ba2c1-f4d2-4580-a072-5e3089d0cd4a\") " pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.246109 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.758947 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.878399 4797 generic.go:334] "Generic (PLEG): container finished" podID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerID="c132e0058d25ce2d261cf11bfeef444d21c0c5ecffbbc497a6d132e050a04673" exitCode=0 Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.878510 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc766fb7b-kdzz7" event={"ID":"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c","Type":"ContainerDied","Data":"c132e0058d25ce2d261cf11bfeef444d21c0c5ecffbbc497a6d132e050a04673"} Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.881676 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"794ba2c1-f4d2-4580-a072-5e3089d0cd4a","Type":"ContainerStarted","Data":"b5a0ea5e9855f4125e9ad996ac73f1e93e44d7c7b3433691479f03d986f43ced"} Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.884378 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerStarted","Data":"c54a2d778aa8bb36ac6b788068cde7d4a984979b1b92ed0a443588930d6babcc"} Feb 16 11:27:05 crc kubenswrapper[4797]: I0216 11:27:05.970710 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.012419 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9" path="/var/lib/kubelet/pods/0bdd2f48-b8b2-403c-ba61-f21a8ebca6a9/volumes" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.054966 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-httpd-config\") pod \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.055333 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-ovndb-tls-certs\") pod \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.055407 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-288wm\" (UniqueName: \"kubernetes.io/projected/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-kube-api-access-288wm\") pod \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.055488 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-combined-ca-bundle\") pod \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.055994 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-config\") pod \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\" (UID: \"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c\") " Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.063432 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-kube-api-access-288wm" (OuterVolumeSpecName: "kube-api-access-288wm") pod "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" (UID: "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c"). InnerVolumeSpecName "kube-api-access-288wm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.066335 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" (UID: "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.125729 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-config" (OuterVolumeSpecName: "config") pod "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" (UID: "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.148826 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" (UID: "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.154883 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" (UID: "a18e4bf1-93ac-4462-8cd7-94d4a3fce54c"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.163782 4797 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.164175 4797 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.164256 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-288wm\" (UniqueName: \"kubernetes.io/projected/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-kube-api-access-288wm\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.164343 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.164413 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.566878 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tmss6"] Feb 16 11:27:06 crc kubenswrapper[4797]: E0216 11:27:06.594129 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-httpd" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.594203 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-httpd" Feb 16 11:27:06 crc kubenswrapper[4797]: E0216 11:27:06.594301 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-api" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.594336 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-api" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.594967 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-httpd" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.595022 4797 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" containerName="neutron-api" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.596289 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.598073 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.598475 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rnmhh" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.599028 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.628968 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tmss6"] Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.679110 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-config-data\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.679390 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-scripts\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.679465 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp9rt\" (UniqueName: \"kubernetes.io/projected/ed21e184-023a-429d-9cda-9f23c24a84e7-kube-api-access-wp9rt\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.679482 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.783043 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-config-data\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.783102 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-scripts\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.783194 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wp9rt\" (UniqueName: \"kubernetes.io/projected/ed21e184-023a-429d-9cda-9f23c24a84e7-kube-api-access-wp9rt\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.783219 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.788301 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-config-data\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.788799 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.789680 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-scripts\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.814407 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wp9rt\" (UniqueName: \"kubernetes.io/projected/ed21e184-023a-429d-9cda-9f23c24a84e7-kube-api-access-wp9rt\") pod \"nova-cell0-conductor-db-sync-tmss6\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.833602 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.896034 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7dc766fb7b-kdzz7" event={"ID":"a18e4bf1-93ac-4462-8cd7-94d4a3fce54c","Type":"ContainerDied","Data":"074f386ff93c1a2bc99450160d476bdbfb6ac27f2b1b9b670fd0e46e9114d158"} Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.896087 4797 scope.go:117] "RemoveContainer" containerID="ca36d862979bc5ce0af35447b3630046aac7e584dfc56a5a3af7c9b2cea22fcc" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.896224 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7dc766fb7b-kdzz7" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.899913 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"794ba2c1-f4d2-4580-a072-5e3089d0cd4a","Type":"ContainerStarted","Data":"8ddf33b123653ab578e81c42585d73dfd7741476b4935964605197b67bf939b3"} Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.901370 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerStarted","Data":"eef252e76bddf92cde4b8b6d061180bb28cfad7f363b6eb42cdea579b7cb4e73"} Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.928211 4797 scope.go:117] "RemoveContainer" containerID="c132e0058d25ce2d261cf11bfeef444d21c0c5ecffbbc497a6d132e050a04673" Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.959056 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7dc766fb7b-kdzz7"] Feb 16 11:27:06 crc kubenswrapper[4797]: I0216 11:27:06.965247 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7dc766fb7b-kdzz7"] Feb 16 11:27:07 crc kubenswrapper[4797]: I0216 11:27:07.403771 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tmss6"] Feb 16 11:27:07 crc kubenswrapper[4797]: I0216 11:27:07.931741 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerStarted","Data":"1dcd1befbf648c3f8135e768dc202a9957ec1b14931646c3d51dd70fa0efaec9"} Feb 16 11:27:07 crc kubenswrapper[4797]: I0216 11:27:07.934939 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tmss6" event={"ID":"ed21e184-023a-429d-9cda-9f23c24a84e7","Type":"ContainerStarted","Data":"9b1a4b9a8e9b8d63d06bcb95175113c1c9c4699bff10783fab3238fede011721"} Feb 16 11:27:07 crc kubenswrapper[4797]: I0216 11:27:07.937416 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"794ba2c1-f4d2-4580-a072-5e3089d0cd4a","Type":"ContainerStarted","Data":"02dfc142217c85abc5109204d77697166ff635e5935b6ef5bf23dc3d85f92ed5"} Feb 16 11:27:07 crc kubenswrapper[4797]: I0216 11:27:07.937586 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 11:27:07 crc kubenswrapper[4797]: I0216 11:27:07.969403 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.969385064 podStartE2EDuration="3.969385064s" podCreationTimestamp="2026-02-16 11:27:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:27:07.964395738 +0000 UTC m=+1222.684580738" watchObservedRunningTime="2026-02-16 11:27:07.969385064 +0000 UTC m=+1222.689570044" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:07.998357 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a18e4bf1-93ac-4462-8cd7-94d4a3fce54c" path="/var/lib/kubelet/pods/a18e4bf1-93ac-4462-8cd7-94d4a3fce54c/volumes" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.187820 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.188143 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/glance-default-external-api-0" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.252354 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.296719 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.953259 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerStarted","Data":"d903e352396d014804bfc3836d3881092cb6a7209cc1c6f7e54a6c0187d97125"} Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.955044 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.954770 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-central-agent" containerID="cri-o://c54a2d778aa8bb36ac6b788068cde7d4a984979b1b92ed0a443588930d6babcc" gracePeriod=30 Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.954838 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="proxy-httpd" containerID="cri-o://d903e352396d014804bfc3836d3881092cb6a7209cc1c6f7e54a6c0187d97125" gracePeriod=30 Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.954879 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-notification-agent" containerID="cri-o://eef252e76bddf92cde4b8b6d061180bb28cfad7f363b6eb42cdea579b7cb4e73" gracePeriod=30 Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.954813 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="sg-core" containerID="cri-o://1dcd1befbf648c3f8135e768dc202a9957ec1b14931646c3d51dd70fa0efaec9" gracePeriod=30 Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.955304 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 11:27:08 crc kubenswrapper[4797]: E0216 11:27:08.986495 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:27:08 crc kubenswrapper[4797]: I0216 11:27:08.990273 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.75152386 podStartE2EDuration="5.990250964s" podCreationTimestamp="2026-02-16 11:27:03 +0000 UTC" firstStartedPulling="2026-02-16 11:27:04.817908315 +0000 UTC m=+1219.538093295" lastFinishedPulling="2026-02-16 11:27:08.056635419 +0000 UTC m=+1222.776820399" observedRunningTime="2026-02-16 11:27:08.986904553 +0000 UTC m=+1223.707089533" watchObservedRunningTime="2026-02-16 11:27:08.990250964 +0000 UTC m=+1223.710435944" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.043082 4797 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.045167 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.072672 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.135144 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.831197 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fb7cc766-tfhd7" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.868316 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7fb7cc766-tfhd7" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.954645 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5dfb479d6b-r2spn"] Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.954883 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5dfb479d6b-r2spn" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-log" containerID="cri-o://382dd506b090917fad7cf2c7b97e73b4d1221387ae29331570b06f1eb8599925" gracePeriod=30 Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.955022 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5dfb479d6b-r2spn" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-api" containerID="cri-o://18bfb05f115df428b9d1bc9e732a206b583a5382ef1629e3551e6399d32440e8" gracePeriod=30 Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.971036 4797 generic.go:334] "Generic (PLEG): container finished" podID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerID="d903e352396d014804bfc3836d3881092cb6a7209cc1c6f7e54a6c0187d97125" exitCode=0 Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.971067 4797 generic.go:334] "Generic (PLEG): container finished" podID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerID="1dcd1befbf648c3f8135e768dc202a9957ec1b14931646c3d51dd70fa0efaec9" exitCode=2 Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.971075 4797 generic.go:334] "Generic (PLEG): container finished" podID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerID="eef252e76bddf92cde4b8b6d061180bb28cfad7f363b6eb42cdea579b7cb4e73" exitCode=0 Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.972715 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerDied","Data":"d903e352396d014804bfc3836d3881092cb6a7209cc1c6f7e54a6c0187d97125"} Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.972778 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerDied","Data":"1dcd1befbf648c3f8135e768dc202a9957ec1b14931646c3d51dd70fa0efaec9"} Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.972798 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerDied","Data":"eef252e76bddf92cde4b8b6d061180bb28cfad7f363b6eb42cdea579b7cb4e73"} Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.973645 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:09 crc kubenswrapper[4797]: I0216 11:27:09.973968 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:10 crc kubenswrapper[4797]: I0216 11:27:10.986319 4797 generic.go:334] "Generic (PLEG): container finished" podID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerID="382dd506b090917fad7cf2c7b97e73b4d1221387ae29331570b06f1eb8599925" exitCode=143 Feb 16 11:27:10 crc kubenswrapper[4797]: I0216 11:27:10.986400 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfb479d6b-r2spn" event={"ID":"c101b1b6-0ac8-4bfb-84ad-2620693178a4","Type":"ContainerDied","Data":"382dd506b090917fad7cf2c7b97e73b4d1221387ae29331570b06f1eb8599925"} Feb 16 11:27:10 crc kubenswrapper[4797]: I0216 11:27:10.986770 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:27:10 crc kubenswrapper[4797]: I0216 11:27:10.986785 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:27:11 crc kubenswrapper[4797]: I0216 11:27:11.272165 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 11:27:11 crc kubenswrapper[4797]: I0216 11:27:11.273292 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 11:27:11 crc kubenswrapper[4797]: I0216 11:27:11.704105 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:27:11 crc kubenswrapper[4797]: I0216 11:27:11.704470 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:27:11 crc kubenswrapper[4797]: I0216 11:27:11.704524 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:27:11 crc kubenswrapper[4797]: I0216 11:27:11.705754 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ba3093423333884d09bb1138cadcee536dc44a6bdfca7536ddc371719d3f0a4a"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:27:11 crc kubenswrapper[4797]: I0216 11:27:11.705828 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://ba3093423333884d09bb1138cadcee536dc44a6bdfca7536ddc371719d3f0a4a" gracePeriod=600 Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 
11:27:12.005345 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="ba3093423333884d09bb1138cadcee536dc44a6bdfca7536ddc371719d3f0a4a" exitCode=0 Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 11:27:12.005405 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"ba3093423333884d09bb1138cadcee536dc44a6bdfca7536ddc371719d3f0a4a"} Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 11:27:12.005446 4797 scope.go:117] "RemoveContainer" containerID="dbd4bdb440a4910da1233a40f8d1a68f6e489c128c5069841c232e62039aea64" Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 11:27:12.005558 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 11:27:12.005566 4797 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 11:27:12.019569 4797 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podb3eaa18d-dc3e-4499-b37e-58ff7449745f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podb3eaa18d-dc3e-4499-b37e-58ff7449745f] : Timed out while waiting for systemd to remove kubepods-besteffort-podb3eaa18d_dc3e_4499_b37e_58ff7449745f.slice" Feb 16 11:27:12 crc kubenswrapper[4797]: E0216 11:27:12.019666 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort podb3eaa18d-dc3e-4499-b37e-58ff7449745f] : unable to destroy cgroup paths for cgroup [kubepods besteffort podb3eaa18d-dc3e-4499-b37e-58ff7449745f] : Timed out while waiting for systemd to remove kubepods-besteffort-podb3eaa18d_dc3e_4499_b37e_58ff7449745f.slice" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 11:27:12.211193 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:12 crc kubenswrapper[4797]: I0216 11:27:12.498281 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 11:27:13 crc kubenswrapper[4797]: I0216 11:27:13.020961 4797 generic.go:334] "Generic (PLEG): container finished" podID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerID="c54a2d778aa8bb36ac6b788068cde7d4a984979b1b92ed0a443588930d6babcc" exitCode=0 Feb 16 11:27:13 crc kubenswrapper[4797]: I0216 11:27:13.021272 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-gl82w" Feb 16 11:27:13 crc kubenswrapper[4797]: I0216 11:27:13.021047 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerDied","Data":"c54a2d778aa8bb36ac6b788068cde7d4a984979b1b92ed0a443588930d6babcc"} Feb 16 11:27:13 crc kubenswrapper[4797]: I0216 11:27:13.096721 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gl82w"] Feb 16 11:27:13 crc kubenswrapper[4797]: I0216 11:27:13.108597 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-gl82w"] Feb 16 11:27:13 crc kubenswrapper[4797]: I0216 11:27:13.995383 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3eaa18d-dc3e-4499-b37e-58ff7449745f" path="/var/lib/kubelet/pods/b3eaa18d-dc3e-4499-b37e-58ff7449745f/volumes" Feb 16 11:27:14 crc kubenswrapper[4797]: I0216 11:27:14.034728 4797 generic.go:334] "Generic (PLEG): container finished" podID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerID="18bfb05f115df428b9d1bc9e732a206b583a5382ef1629e3551e6399d32440e8" exitCode=0 Feb 16 11:27:14 crc kubenswrapper[4797]: I0216 11:27:14.034823 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfb479d6b-r2spn" event={"ID":"c101b1b6-0ac8-4bfb-84ad-2620693178a4","Type":"ContainerDied","Data":"18bfb05f115df428b9d1bc9e732a206b583a5382ef1629e3551e6399d32440e8"} Feb 16 11:27:17 crc kubenswrapper[4797]: I0216 11:27:17.251267 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.567605 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.664412 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-config-data\") pod \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.664476 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-sg-core-conf-yaml\") pod \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.664523 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f4dx\" (UniqueName: \"kubernetes.io/projected/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-kube-api-access-9f4dx\") pod \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.664544 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-combined-ca-bundle\") pod \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.664671 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-run-httpd\") pod \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.664705 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-log-httpd\") pod \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.664733 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-scripts\") pod \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\" (UID: \"bab9c6c7-33cd-4004-8ea6-002efbec2ae8\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.665467 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bab9c6c7-33cd-4004-8ea6-002efbec2ae8" (UID: "bab9c6c7-33cd-4004-8ea6-002efbec2ae8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.665604 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bab9c6c7-33cd-4004-8ea6-002efbec2ae8" (UID: "bab9c6c7-33cd-4004-8ea6-002efbec2ae8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.670162 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-kube-api-access-9f4dx" (OuterVolumeSpecName: "kube-api-access-9f4dx") pod "bab9c6c7-33cd-4004-8ea6-002efbec2ae8" (UID: "bab9c6c7-33cd-4004-8ea6-002efbec2ae8"). InnerVolumeSpecName "kube-api-access-9f4dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.670511 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-scripts" (OuterVolumeSpecName: "scripts") pod "bab9c6c7-33cd-4004-8ea6-002efbec2ae8" (UID: "bab9c6c7-33cd-4004-8ea6-002efbec2ae8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.710845 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bab9c6c7-33cd-4004-8ea6-002efbec2ae8" (UID: "bab9c6c7-33cd-4004-8ea6-002efbec2ae8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.753979 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bab9c6c7-33cd-4004-8ea6-002efbec2ae8" (UID: "bab9c6c7-33cd-4004-8ea6-002efbec2ae8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.767059 4797 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.767093 4797 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.767103 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.767114 4797 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.767127 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f4dx\" (UniqueName: \"kubernetes.io/projected/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-kube-api-access-9f4dx\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.767137 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.772899 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-config-data" (OuterVolumeSpecName: "config-data") pod "bab9c6c7-33cd-4004-8ea6-002efbec2ae8" (UID: "bab9c6c7-33cd-4004-8ea6-002efbec2ae8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.858125 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5dfb479d6b-r2spn" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.872127 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bab9c6c7-33cd-4004-8ea6-002efbec2ae8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.974120 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-combined-ca-bundle\") pod \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.974762 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-public-tls-certs\") pod \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.974961 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-internal-tls-certs\") pod \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.975076 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-config-data\") pod \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.975176 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm7xg\" (UniqueName: \"kubernetes.io/projected/c101b1b6-0ac8-4bfb-84ad-2620693178a4-kube-api-access-nm7xg\") pod \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.975234 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-scripts\") pod \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.975315 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c101b1b6-0ac8-4bfb-84ad-2620693178a4-logs\") pod \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\" (UID: \"c101b1b6-0ac8-4bfb-84ad-2620693178a4\") " Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.976629 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c101b1b6-0ac8-4bfb-84ad-2620693178a4-logs" (OuterVolumeSpecName: "logs") pod "c101b1b6-0ac8-4bfb-84ad-2620693178a4" (UID: "c101b1b6-0ac8-4bfb-84ad-2620693178a4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.979128 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c101b1b6-0ac8-4bfb-84ad-2620693178a4-kube-api-access-nm7xg" (OuterVolumeSpecName: "kube-api-access-nm7xg") pod "c101b1b6-0ac8-4bfb-84ad-2620693178a4" (UID: "c101b1b6-0ac8-4bfb-84ad-2620693178a4"). InnerVolumeSpecName "kube-api-access-nm7xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:18 crc kubenswrapper[4797]: I0216 11:27:18.980427 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-scripts" (OuterVolumeSpecName: "scripts") pod "c101b1b6-0ac8-4bfb-84ad-2620693178a4" (UID: "c101b1b6-0ac8-4bfb-84ad-2620693178a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.036681 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-config-data" (OuterVolumeSpecName: "config-data") pod "c101b1b6-0ac8-4bfb-84ad-2620693178a4" (UID: "c101b1b6-0ac8-4bfb-84ad-2620693178a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.051465 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c101b1b6-0ac8-4bfb-84ad-2620693178a4" (UID: "c101b1b6-0ac8-4bfb-84ad-2620693178a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.077668 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nm7xg\" (UniqueName: \"kubernetes.io/projected/c101b1b6-0ac8-4bfb-84ad-2620693178a4-kube-api-access-nm7xg\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.077705 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.077715 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c101b1b6-0ac8-4bfb-84ad-2620693178a4-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.077724 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.077732 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.088349 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c101b1b6-0ac8-4bfb-84ad-2620693178a4" (UID: "c101b1b6-0ac8-4bfb-84ad-2620693178a4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.088759 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c101b1b6-0ac8-4bfb-84ad-2620693178a4" (UID: "c101b1b6-0ac8-4bfb-84ad-2620693178a4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.090059 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5dfb479d6b-r2spn" event={"ID":"c101b1b6-0ac8-4bfb-84ad-2620693178a4","Type":"ContainerDied","Data":"d0bc0c70726d980f3ad0e05ae9761c1659a852d626263da2ec3f01a891487015"} Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.090116 4797 scope.go:117] "RemoveContainer" containerID="18bfb05f115df428b9d1bc9e732a206b583a5382ef1629e3551e6399d32440e8" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.090082 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5dfb479d6b-r2spn" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.097342 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bab9c6c7-33cd-4004-8ea6-002efbec2ae8","Type":"ContainerDied","Data":"a5aaab249dbc97c523300d50b98620f8422abf648d3adcd2196debb60b71081a"} Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.097519 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.136787 4797 scope.go:117] "RemoveContainer" containerID="382dd506b090917fad7cf2c7b97e73b4d1221387ae29331570b06f1eb8599925" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.144781 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.157031 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.171558 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5dfb479d6b-r2spn"] Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.179000 4797 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.179030 4797 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c101b1b6-0ac8-4bfb-84ad-2620693178a4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.187369 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5dfb479d6b-r2spn"] Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.192878 4797 scope.go:117] "RemoveContainer" containerID="d903e352396d014804bfc3836d3881092cb6a7209cc1c6f7e54a6c0187d97125" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.219655 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:19 crc kubenswrapper[4797]: E0216 11:27:19.220177 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="sg-core" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 
11:27:19.220201 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="sg-core" Feb 16 11:27:19 crc kubenswrapper[4797]: E0216 11:27:19.220216 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-api" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220224 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-api" Feb 16 11:27:19 crc kubenswrapper[4797]: E0216 11:27:19.220253 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-central-agent" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220264 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-central-agent" Feb 16 11:27:19 crc kubenswrapper[4797]: E0216 11:27:19.220304 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-log" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220311 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-log" Feb 16 11:27:19 crc kubenswrapper[4797]: E0216 11:27:19.220323 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-notification-agent" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220330 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-notification-agent" Feb 16 11:27:19 crc kubenswrapper[4797]: E0216 11:27:19.220346 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="proxy-httpd" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220354 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="proxy-httpd" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220609 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-api" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220635 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-notification-agent" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220649 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="sg-core" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220664 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="ceilometer-central-agent" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220675 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" containerName="proxy-httpd" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.220685 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" containerName="placement-log" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.223453 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.226073 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.226833 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.232385 4797 scope.go:117] "RemoveContainer" containerID="1dcd1befbf648c3f8135e768dc202a9957ec1b14931646c3d51dd70fa0efaec9" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.234121 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.254739 4797 scope.go:117] "RemoveContainer" containerID="eef252e76bddf92cde4b8b6d061180bb28cfad7f363b6eb42cdea579b7cb4e73" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.282423 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llf2v\" (UniqueName: \"kubernetes.io/projected/e283a412-6f1b-407b-83fe-f6adcd0d1456-kube-api-access-llf2v\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.282608 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.282656 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.282780 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-scripts\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.282876 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-log-httpd\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.282986 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-config-data\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.283110 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-run-httpd\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 
11:27:19.288954 4797 scope.go:117] "RemoveContainer" containerID="c54a2d778aa8bb36ac6b788068cde7d4a984979b1b92ed0a443588930d6babcc" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.389879 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-scripts\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.389974 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-log-httpd\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.390042 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-config-data\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.390115 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-run-httpd\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.390198 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llf2v\" (UniqueName: \"kubernetes.io/projected/e283a412-6f1b-407b-83fe-f6adcd0d1456-kube-api-access-llf2v\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.390270 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.390301 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.391615 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-run-httpd\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.391699 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-log-httpd\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.394396 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.394570 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.394641 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-scripts\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.395880 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-config-data\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.412178 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llf2v\" (UniqueName: \"kubernetes.io/projected/e283a412-6f1b-407b-83fe-f6adcd0d1456-kube-api-access-llf2v\") pod \"ceilometer-0\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.541974 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.994383 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab9c6c7-33cd-4004-8ea6-002efbec2ae8" path="/var/lib/kubelet/pods/bab9c6c7-33cd-4004-8ea6-002efbec2ae8/volumes" Feb 16 11:27:19 crc kubenswrapper[4797]: I0216 11:27:19.995998 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c101b1b6-0ac8-4bfb-84ad-2620693178a4" path="/var/lib/kubelet/pods/c101b1b6-0ac8-4bfb-84ad-2620693178a4/volumes" Feb 16 11:27:20 crc kubenswrapper[4797]: W0216 11:27:20.019869 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode283a412_6f1b_407b_83fe_f6adcd0d1456.slice/crio-61b3ee3d2b34fd45d388078a5f9cd1e9a218e0f1ed675d082966cb902eaf8cc0 WatchSource:0}: Error finding container 61b3ee3d2b34fd45d388078a5f9cd1e9a218e0f1ed675d082966cb902eaf8cc0: Status 404 returned error can't find the container with id 61b3ee3d2b34fd45d388078a5f9cd1e9a218e0f1ed675d082966cb902eaf8cc0 Feb 16 11:27:20 crc kubenswrapper[4797]: I0216 11:27:20.021690 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:20 crc kubenswrapper[4797]: I0216 11:27:20.108083 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerStarted","Data":"61b3ee3d2b34fd45d388078a5f9cd1e9a218e0f1ed675d082966cb902eaf8cc0"} Feb 16 11:27:20 crc kubenswrapper[4797]: I0216 11:27:20.112379 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"4adc9a4c2c83159ff79fb70bc5ca20f35b8b3e0651a933445365cd8743b4c78b"} Feb 16 11:27:20 crc kubenswrapper[4797]: E0216 11:27:20.984362 4797 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:27:21 crc kubenswrapper[4797]: I0216 11:27:21.127269 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerStarted","Data":"5c7c9f3ed81231ad89e7d3e9819c24093d4551848f3a0dee4a07a8c48f2646b0"} Feb 16 11:27:21 crc kubenswrapper[4797]: I0216 11:27:21.129151 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tmss6" event={"ID":"ed21e184-023a-429d-9cda-9f23c24a84e7","Type":"ContainerStarted","Data":"69c3b2a8783e3241c7aa07cb63c33e161076d61bf61b41d6fd6bf43af290e988"} Feb 16 11:27:21 crc kubenswrapper[4797]: I0216 11:27:21.151801 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-tmss6" podStartSLOduration=2.639519906 podStartE2EDuration="15.151779072s" podCreationTimestamp="2026-02-16 11:27:06 +0000 UTC" firstStartedPulling="2026-02-16 11:27:07.414270498 +0000 UTC m=+1222.134455478" lastFinishedPulling="2026-02-16 11:27:19.926529664 +0000 UTC m=+1234.646714644" observedRunningTime="2026-02-16 11:27:21.144803121 +0000 UTC m=+1235.864988101" watchObservedRunningTime="2026-02-16 11:27:21.151779072 +0000 UTC m=+1235.871964052" Feb 16 11:27:22 crc kubenswrapper[4797]: I0216 11:27:22.142022 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerStarted","Data":"eee129c076b870d469205fc5bd6d664bb4036911e5524eb52ad8b6c82718eb69"} Feb 16 11:27:23 crc kubenswrapper[4797]: I0216 11:27:23.154529 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerStarted","Data":"91d34a65c9989e1b43406325934a387dd1d20aab5f2edb04062f69f98f6ea9f2"} Feb 16 11:27:25 crc kubenswrapper[4797]: I0216 11:27:25.177962 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerStarted","Data":"3b76d06b50a4e3717927a7f3d3a2c0886a5b44132ae16260aa281e2bd5207510"} Feb 16 11:27:25 crc kubenswrapper[4797]: I0216 11:27:25.178553 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 11:27:25 crc kubenswrapper[4797]: I0216 11:27:25.203596 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.852951092 podStartE2EDuration="6.203568025s" podCreationTimestamp="2026-02-16 11:27:19 +0000 UTC" firstStartedPulling="2026-02-16 11:27:20.021619454 +0000 UTC m=+1234.741804434" lastFinishedPulling="2026-02-16 11:27:24.372236367 +0000 UTC m=+1239.092421367" observedRunningTime="2026-02-16 11:27:25.201699463 +0000 UTC m=+1239.921884463" watchObservedRunningTime="2026-02-16 11:27:25.203568025 +0000 UTC m=+1239.923753005" Feb 16 11:27:31 crc kubenswrapper[4797]: E0216 11:27:31.985667 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:27:32 crc kubenswrapper[4797]: I0216 11:27:32.250021 4797 generic.go:334] "Generic (PLEG): container finished" podID="ed21e184-023a-429d-9cda-9f23c24a84e7" containerID="69c3b2a8783e3241c7aa07cb63c33e161076d61bf61b41d6fd6bf43af290e988" exitCode=0 Feb 16 11:27:32 crc kubenswrapper[4797]: I0216 11:27:32.250056 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tmss6" event={"ID":"ed21e184-023a-429d-9cda-9f23c24a84e7","Type":"ContainerDied","Data":"69c3b2a8783e3241c7aa07cb63c33e161076d61bf61b41d6fd6bf43af290e988"} Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.702909 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.791552 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wp9rt\" (UniqueName: \"kubernetes.io/projected/ed21e184-023a-429d-9cda-9f23c24a84e7-kube-api-access-wp9rt\") pod \"ed21e184-023a-429d-9cda-9f23c24a84e7\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.791880 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-combined-ca-bundle\") pod \"ed21e184-023a-429d-9cda-9f23c24a84e7\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.792009 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-scripts\") pod \"ed21e184-023a-429d-9cda-9f23c24a84e7\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.792142 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-config-data\") pod \"ed21e184-023a-429d-9cda-9f23c24a84e7\" (UID: \"ed21e184-023a-429d-9cda-9f23c24a84e7\") " Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.798783 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-scripts" (OuterVolumeSpecName: "scripts") pod "ed21e184-023a-429d-9cda-9f23c24a84e7" (UID: "ed21e184-023a-429d-9cda-9f23c24a84e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.798862 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed21e184-023a-429d-9cda-9f23c24a84e7-kube-api-access-wp9rt" (OuterVolumeSpecName: "kube-api-access-wp9rt") pod "ed21e184-023a-429d-9cda-9f23c24a84e7" (UID: "ed21e184-023a-429d-9cda-9f23c24a84e7"). InnerVolumeSpecName "kube-api-access-wp9rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.835321 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-config-data" (OuterVolumeSpecName: "config-data") pod "ed21e184-023a-429d-9cda-9f23c24a84e7" (UID: "ed21e184-023a-429d-9cda-9f23c24a84e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.838138 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed21e184-023a-429d-9cda-9f23c24a84e7" (UID: "ed21e184-023a-429d-9cda-9f23c24a84e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.894373 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.894429 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.894442 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed21e184-023a-429d-9cda-9f23c24a84e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:33 crc kubenswrapper[4797]: I0216 11:27:33.894455 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wp9rt\" (UniqueName: \"kubernetes.io/projected/ed21e184-023a-429d-9cda-9f23c24a84e7-kube-api-access-wp9rt\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.275145 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tmss6" event={"ID":"ed21e184-023a-429d-9cda-9f23c24a84e7","Type":"ContainerDied","Data":"9b1a4b9a8e9b8d63d06bcb95175113c1c9c4699bff10783fab3238fede011721"} Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.275187 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1a4b9a8e9b8d63d06bcb95175113c1c9c4699bff10783fab3238fede011721" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.275238 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tmss6" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.377868 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 11:27:34 crc kubenswrapper[4797]: E0216 11:27:34.378368 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed21e184-023a-429d-9cda-9f23c24a84e7" containerName="nova-cell0-conductor-db-sync" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.378398 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed21e184-023a-429d-9cda-9f23c24a84e7" containerName="nova-cell0-conductor-db-sync" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.378697 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed21e184-023a-429d-9cda-9f23c24a84e7" containerName="nova-cell0-conductor-db-sync" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.379572 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.381821 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rnmhh" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.382881 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.393989 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.401888 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff8619-712d-4c81-bda1-db0af8c708aa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.401965 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtkns\" (UniqueName: \"kubernetes.io/projected/78ff8619-712d-4c81-bda1-db0af8c708aa-kube-api-access-gtkns\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.402128 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff8619-712d-4c81-bda1-db0af8c708aa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.503667 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff8619-712d-4c81-bda1-db0af8c708aa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.503819 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff8619-712d-4c81-bda1-db0af8c708aa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 
11:27:34.503873 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtkns\" (UniqueName: \"kubernetes.io/projected/78ff8619-712d-4c81-bda1-db0af8c708aa-kube-api-access-gtkns\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.508312 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78ff8619-712d-4c81-bda1-db0af8c708aa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.510317 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78ff8619-712d-4c81-bda1-db0af8c708aa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.524844 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtkns\" (UniqueName: \"kubernetes.io/projected/78ff8619-712d-4c81-bda1-db0af8c708aa-kube-api-access-gtkns\") pod \"nova-cell0-conductor-0\" (UID: \"78ff8619-712d-4c81-bda1-db0af8c708aa\") " pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:34 crc kubenswrapper[4797]: I0216 11:27:34.700155 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:35 crc kubenswrapper[4797]: I0216 11:27:35.153138 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 11:27:35 crc kubenswrapper[4797]: W0216 11:27:35.164819 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78ff8619_712d_4c81_bda1_db0af8c708aa.slice/crio-04b2147577e863e281b8e5acef09368c22fecd78b6c106e18bcafc1d7c8b40ff WatchSource:0}: Error finding container 04b2147577e863e281b8e5acef09368c22fecd78b6c106e18bcafc1d7c8b40ff: Status 404 returned error can't find the container with id 04b2147577e863e281b8e5acef09368c22fecd78b6c106e18bcafc1d7c8b40ff Feb 16 11:27:35 crc kubenswrapper[4797]: I0216 11:27:35.306059 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"78ff8619-712d-4c81-bda1-db0af8c708aa","Type":"ContainerStarted","Data":"04b2147577e863e281b8e5acef09368c22fecd78b6c106e18bcafc1d7c8b40ff"} Feb 16 11:27:36 crc kubenswrapper[4797]: I0216 11:27:36.317831 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"78ff8619-712d-4c81-bda1-db0af8c708aa","Type":"ContainerStarted","Data":"1aa896ddf2a8e22b47199f69a7bf0e601440de7aa22ae997eade8e56ebcc51d6"} Feb 16 11:27:36 crc kubenswrapper[4797]: I0216 11:27:36.318353 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:36 crc kubenswrapper[4797]: I0216 11:27:36.338422 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.338402094 podStartE2EDuration="2.338402094s" podCreationTimestamp="2026-02-16 11:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 11:27:36.333196721 +0000 UTC m=+1251.053381701" watchObservedRunningTime="2026-02-16 11:27:36.338402094 +0000 UTC m=+1251.058587074" Feb 16 11:27:44 crc kubenswrapper[4797]: I0216 11:27:44.728597 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 11:27:44 crc kubenswrapper[4797]: E0216 11:27:44.987507 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.257567 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-s2mq9"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.259083 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.265667 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.266148 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.328146 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2mq9"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.428672 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.430383 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.435250 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.446941 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-config-data\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.447457 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.447512 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-scripts\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.447617 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7gml\" (UniqueName: \"kubernetes.io/projected/25ccd652-9aab-49ee-bbad-cdb91133f3a6-kube-api-access-z7gml\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.457359 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.459169 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.468105 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.482611 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.500102 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.550993 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-config-data\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551064 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3678dbc0-cd6b-4cf7-8695-d76da81e8107-logs\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551090 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551130 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-config-data\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551200 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551218 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twpc7\" (UniqueName: \"kubernetes.io/projected/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-kube-api-access-twpc7\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551241 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-scripts\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551267 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-config-data\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 
11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551299 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7gml\" (UniqueName: \"kubernetes.io/projected/25ccd652-9aab-49ee-bbad-cdb91133f3a6-kube-api-access-z7gml\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551322 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5klkt\" (UniqueName: \"kubernetes.io/projected/3678dbc0-cd6b-4cf7-8695-d76da81e8107-kube-api-access-5klkt\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.551345 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.558017 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-config-data\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.559257 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.561119 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.561699 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.565602 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.565792 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-scripts\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.609011 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7gml\" (UniqueName: \"kubernetes.io/projected/25ccd652-9aab-49ee-bbad-cdb91133f3a6-kube-api-access-z7gml\") pod \"nova-cell0-cell-mapping-s2mq9\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.625664 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.655850 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-config-data\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.655921 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.655958 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3678dbc0-cd6b-4cf7-8695-d76da81e8107-logs\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.655983 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656075 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-config-data\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656688 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twpc7\" (UniqueName: 
\"kubernetes.io/projected/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-kube-api-access-twpc7\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656741 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6dq\" (UniqueName: \"kubernetes.io/projected/3f4a570e-f010-42e7-8d2e-56d6e6777640-kube-api-access-6p6dq\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656770 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-config-data\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656829 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5klkt\" (UniqueName: \"kubernetes.io/projected/3678dbc0-cd6b-4cf7-8695-d76da81e8107-kube-api-access-5klkt\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656864 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656884 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4a570e-f010-42e7-8d2e-56d6e6777640-logs\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.656542 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3678dbc0-cd6b-4cf7-8695-d76da81e8107-logs\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.659283 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-config-data\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.662895 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.665019 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.674669 4797 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.676211 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.687077 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.688147 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.691230 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-config-data\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.696761 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5klkt\" (UniqueName: \"kubernetes.io/projected/3678dbc0-cd6b-4cf7-8695-d76da81e8107-kube-api-access-5klkt\") pod \"nova-api-0\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.704206 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twpc7\" (UniqueName: \"kubernetes.io/projected/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-kube-api-access-twpc7\") pod \"nova-scheduler-0\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.704872 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jm6gb"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.706602 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.762887 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4a570e-f010-42e7-8d2e-56d6e6777640-logs\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.763520 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.763754 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-config-data\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.763903 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6dq\" (UniqueName: \"kubernetes.io/projected/3f4a570e-f010-42e7-8d2e-56d6e6777640-kube-api-access-6p6dq\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.766088 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4a570e-f010-42e7-8d2e-56d6e6777640-logs\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.779359 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-config-data\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.779545 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.779565 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jm6gb"] Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.790429 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.810067 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.817616 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6dq\" (UniqueName: \"kubernetes.io/projected/3f4a570e-f010-42e7-8d2e-56d6e6777640-kube-api-access-6p6dq\") pod \"nova-metadata-0\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866019 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-config\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866071 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866100 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866130 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866168 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84glt\" (UniqueName: \"kubernetes.io/projected/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-kube-api-access-84glt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866190 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866217 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866238 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6ztw\" (UniqueName: \"kubernetes.io/projected/13a4aa0f-f231-4931-b9c4-78f032d96d5f-kube-api-access-x6ztw\") pod 
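Each volume in the entries above moves through the same three-step pipeline: reconciler_common.go:245 confirms the volume is attached (VerifyControllerAttachedVolume), reconciler_common.go:218 starts the mount, and operation_generator.go:637 reports MountVolume.SetUp succeeded. The UniqueName field packs the volume plugin, the owning pod's UID, and the volume name into one string. A minimal sketch of that naming convention, for reading these lines only; the real kubelet uses its own typed UniqueVolumeName rather than this hypothetical parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseUniqueName splits a UniqueName such as
// "kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data"
// into plugin, pod UID, and volume name. A pod UID is always 36 characters,
// which is what makes the split unambiguous.
func parseUniqueName(u string) (plugin, podUID, volume string) {
	parts := strings.SplitN(u, "/", 3)
	if len(parts) != 3 || len(parts[2]) < 38 {
		return "", "", ""
	}
	plugin = parts[0] + "/" + parts[1]
	rest := parts[2]
	return plugin, rest[:36], rest[37:]
}

func main() {
	fmt.Println(parseUniqueName(
		"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data"))
	// kubernetes.io/secret b635730e-ec75-48d6-b0eb-0c74bfa7ea0d config-data
}
```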
\"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.866272 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.882980 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.886600 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967702 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967770 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967804 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967844 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84glt\" (UniqueName: \"kubernetes.io/projected/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-kube-api-access-84glt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967862 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967894 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967911 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6ztw\" (UniqueName: \"kubernetes.io/projected/13a4aa0f-f231-4931-b9c4-78f032d96d5f-kube-api-access-x6ztw\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: 
\"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.967946 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.968022 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-config\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.968807 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-config\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.969544 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.970038 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-svc\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.972745 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.974124 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.981223 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.990130 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:45 crc kubenswrapper[4797]: I0216 11:27:45.998157 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.007600 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84glt\" (UniqueName: \"kubernetes.io/projected/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-kube-api-access-84glt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.037389 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6ztw\" (UniqueName: \"kubernetes.io/projected/13a4aa0f-f231-4931-b9c4-78f032d96d5f-kube-api-access-x6ztw\") pod \"dnsmasq-dns-757b4f8459-jm6gb\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.214202 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.255443 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.786980 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m7cv5"] Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.801274 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.806474 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.806715 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.830138 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m7cv5"] Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.880755 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2mq9"] Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.913199 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.935863 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.947940 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-scripts\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.948067 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.948114 4797 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:46 crc kubenswrapper[4797]: I0216 11:27:46.948172 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srgkv\" (UniqueName: \"kubernetes.io/projected/982c5633-bbd8-4437-b8b5-11e3b6783e19-kube-api-access-srgkv\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.050355 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.050763 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.050889 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srgkv\" (UniqueName: \"kubernetes.io/projected/982c5633-bbd8-4437-b8b5-11e3b6783e19-kube-api-access-srgkv\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.051437 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-scripts\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.055414 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.056867 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-scripts\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.066486 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc 
kubenswrapper[4797]: I0216 11:27:47.070990 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srgkv\" (UniqueName: \"kubernetes.io/projected/982c5633-bbd8-4437-b8b5-11e3b6783e19-kube-api-access-srgkv\") pod \"nova-cell1-conductor-db-sync-m7cv5\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.128740 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.194689 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.220797 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jm6gb"] Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.232190 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:27:47 crc kubenswrapper[4797]: W0216 11:27:47.244307 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f4a570e_f010_42e7_8d2e_56d6e6777640.slice/crio-cb72c429efe48e42571c1d2f7ce403aa81cbfd59141e43cde9a2d1483b16694a WatchSource:0}: Error finding container cb72c429efe48e42571c1d2f7ce403aa81cbfd59141e43cde9a2d1483b16694a: Status 404 returned error can't find the container with id cb72c429efe48e42571c1d2f7ce403aa81cbfd59141e43cde9a2d1483b16694a Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.493513 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f4a570e-f010-42e7-8d2e-56d6e6777640","Type":"ContainerStarted","Data":"cb72c429efe48e42571c1d2f7ce403aa81cbfd59141e43cde9a2d1483b16694a"} Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.495824 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" event={"ID":"13a4aa0f-f231-4931-b9c4-78f032d96d5f","Type":"ContainerStarted","Data":"57b36923d8414677db5e19a0c191714088cdedc40ef3a3d3c8ba9e771f234b18"} Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.498062 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f74c203-39da-4fe9-9cda-f5efdb0b5fad","Type":"ContainerStarted","Data":"49188170be287389d34a9963e68cacae9e7d74a696a1008c697a0b8722354376"} Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.499829 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d","Type":"ContainerStarted","Data":"a83d9716258350b60c10b26bc0bb7ab6ecf567a7ed7d2aaddd868a46a5e361be"} Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.502391 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2mq9" event={"ID":"25ccd652-9aab-49ee-bbad-cdb91133f3a6","Type":"ContainerStarted","Data":"d12dffc4553c752285f065aa7f527e129b1ac2a2e6055a45e5b3a47397d49ca0"} Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.502429 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2mq9" event={"ID":"25ccd652-9aab-49ee-bbad-cdb91133f3a6","Type":"ContainerStarted","Data":"1bc4680f79df7a32749d479acd95847850dc95c4f5177ccd9595923030112db6"} Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.503602 4797 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3678dbc0-cd6b-4cf7-8695-d76da81e8107","Type":"ContainerStarted","Data":"20f420d3a5e207b82f258740e3c7634e92a8f34854ab67b89b5b4f48d72ebfa2"} Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.525568 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-s2mq9" podStartSLOduration=2.525536031 podStartE2EDuration="2.525536031s" podCreationTimestamp="2026-02-16 11:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:27:47.514468438 +0000 UTC m=+1262.234653418" watchObservedRunningTime="2026-02-16 11:27:47.525536031 +0000 UTC m=+1262.245721011" Feb 16 11:27:47 crc kubenswrapper[4797]: I0216 11:27:47.660041 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m7cv5"] Feb 16 11:27:48 crc kubenswrapper[4797]: I0216 11:27:48.547417 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" event={"ID":"982c5633-bbd8-4437-b8b5-11e3b6783e19","Type":"ContainerStarted","Data":"4f9d3169f2f6e894bd96783bcddf67034cbf3d40979ad943b78a467f492c124b"} Feb 16 11:27:48 crc kubenswrapper[4797]: I0216 11:27:48.547740 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" event={"ID":"982c5633-bbd8-4437-b8b5-11e3b6783e19","Type":"ContainerStarted","Data":"ff98b9ed5381a1099479702e8de8b2b0670c4cdd2333d609177a56b16e677929"} Feb 16 11:27:48 crc kubenswrapper[4797]: I0216 11:27:48.555684 4797 generic.go:334] "Generic (PLEG): container finished" podID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerID="5841bc64475aa0c59a4fcc46ad6533f52c0b6cc2f4c6ba6eb7ac20093aca835b" exitCode=0 Feb 16 11:27:48 crc kubenswrapper[4797]: I0216 11:27:48.556266 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" event={"ID":"13a4aa0f-f231-4931-b9c4-78f032d96d5f","Type":"ContainerDied","Data":"5841bc64475aa0c59a4fcc46ad6533f52c0b6cc2f4c6ba6eb7ac20093aca835b"} Feb 16 11:27:48 crc kubenswrapper[4797]: I0216 11:27:48.579151 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" podStartSLOduration=2.579130195 podStartE2EDuration="2.579130195s" podCreationTimestamp="2026-02-16 11:27:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:27:48.569356988 +0000 UTC m=+1263.289541968" watchObservedRunningTime="2026-02-16 11:27:48.579130195 +0000 UTC m=+1263.299315175" Feb 16 11:27:49 crc kubenswrapper[4797]: I0216 11:27:49.553661 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 11:27:49 crc kubenswrapper[4797]: I0216 11:27:49.827650 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:27:49 crc kubenswrapper[4797]: I0216 11:27:49.844343 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.592121 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f74c203-39da-4fe9-9cda-f5efdb0b5fad","Type":"ContainerStarted","Data":"91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7"} Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 
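The pod_startup_latency_tracker entries fold image-pull time out of the SLO figure. For nova-cell0-cell-mapping-s2mq9 above, no pull happened (both pull timestamps are the zero time), so podStartSLOduration equals podStartE2EDuration; for nova-scheduler-0 just below, subtracting the pull window from the 6.61850664s E2E duration reproduces the logged 2.987229953s. A quick check using the monotonic (m=+...) offsets; the formula is inferred from these values, not quoted from kubelet source:

```go
package main

import "fmt"

func main() {
	// Values copied from the nova-scheduler-0 tracker entry below:
	e2e := 6.61850664           // podStartE2EDuration, seconds
	pullStart := 1261.636571107 // firstStartedPulling, m=+ offset
	pullEnd := 1265.267847794   // lastFinishedPulling, m=+ offset
	fmt.Printf("podStartSLOduration = %.9f s\n", e2e-(pullEnd-pullStart))
	// prints 2.987229953, matching the logged value
}
```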
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.595729 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d","Type":"ContainerStarted","Data":"2374e0b87252c78fc1a3717a35a9256f740d933a57dbf1750bc32f70a3ac2f71"}
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.595852 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://2374e0b87252c78fc1a3717a35a9256f740d933a57dbf1750bc32f70a3ac2f71" gracePeriod=30
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.601718 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3678dbc0-cd6b-4cf7-8695-d76da81e8107","Type":"ContainerStarted","Data":"92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc"}
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.601755 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3678dbc0-cd6b-4cf7-8695-d76da81e8107","Type":"ContainerStarted","Data":"ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c"}
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.603878 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f4a570e-f010-42e7-8d2e-56d6e6777640","Type":"ContainerStarted","Data":"1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e"}
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.603914 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f4a570e-f010-42e7-8d2e-56d6e6777640","Type":"ContainerStarted","Data":"8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4"}
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.603933 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerName="nova-metadata-log" containerID="cri-o://8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4" gracePeriod=30
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.603963 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerName="nova-metadata-metadata" containerID="cri-o://1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e" gracePeriod=30
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.618528 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.987229953 podStartE2EDuration="6.61850664s" podCreationTimestamp="2026-02-16 11:27:45 +0000 UTC" firstStartedPulling="2026-02-16 11:27:46.916386137 +0000 UTC m=+1261.636571107" lastFinishedPulling="2026-02-16 11:27:50.547662814 +0000 UTC m=+1265.267847794" observedRunningTime="2026-02-16 11:27:51.613753401 +0000 UTC m=+1266.333938391" watchObservedRunningTime="2026-02-16 11:27:51.61850664 +0000 UTC m=+1266.338691620"
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.626901 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" event={"ID":"13a4aa0f-f231-4931-b9c4-78f032d96d5f","Type":"ContainerStarted","Data":"d47cca45405da8d9352199490ddecb8520d703efaef276cde64f167eed2decd2"}
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.627737 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb"
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.647524 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.340524942 podStartE2EDuration="6.647503643s" podCreationTimestamp="2026-02-16 11:27:45 +0000 UTC" firstStartedPulling="2026-02-16 11:27:47.256741072 +0000 UTC m=+1261.976926062" lastFinishedPulling="2026-02-16 11:27:50.563719783 +0000 UTC m=+1265.283904763" observedRunningTime="2026-02-16 11:27:51.632821151 +0000 UTC m=+1266.353006141" watchObservedRunningTime="2026-02-16 11:27:51.647503643 +0000 UTC m=+1266.367688623"
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.660703 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.302894933 podStartE2EDuration="6.660677403s" podCreationTimestamp="2026-02-16 11:27:45 +0000 UTC" firstStartedPulling="2026-02-16 11:27:47.191120348 +0000 UTC m=+1261.911305328" lastFinishedPulling="2026-02-16 11:27:50.548902818 +0000 UTC m=+1265.269087798" observedRunningTime="2026-02-16 11:27:51.657608399 +0000 UTC m=+1266.377793399" watchObservedRunningTime="2026-02-16 11:27:51.660677403 +0000 UTC m=+1266.380862403"
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.680927 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.047142542 podStartE2EDuration="6.680907206s" podCreationTimestamp="2026-02-16 11:27:45 +0000 UTC" firstStartedPulling="2026-02-16 11:27:46.912168412 +0000 UTC m=+1261.632353392" lastFinishedPulling="2026-02-16 11:27:50.545933036 +0000 UTC m=+1265.266118056" observedRunningTime="2026-02-16 11:27:51.676481215 +0000 UTC m=+1266.396666195" watchObservedRunningTime="2026-02-16 11:27:51.680907206 +0000 UTC m=+1266.401092186"
Feb 16 11:27:51 crc kubenswrapper[4797]: I0216 11:27:51.704462 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" podStartSLOduration=6.704434139 podStartE2EDuration="6.704434139s" podCreationTimestamp="2026-02-16 11:27:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:27:51.69604333 +0000 UTC m=+1266.416228310" watchObservedRunningTime="2026-02-16 11:27:51.704434139 +0000 UTC m=+1266.424619119"
Feb 16 11:27:52 crc kubenswrapper[4797]: I0216 11:27:52.640525 4797 generic.go:334] "Generic (PLEG): container finished" podID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerID="8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4" exitCode=143
Feb 16 11:27:52 crc kubenswrapper[4797]: I0216 11:27:52.640629 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f4a570e-f010-42e7-8d2e-56d6e6777640","Type":"ContainerDied","Data":"8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4"}
Feb 16 11:27:54 crc kubenswrapper[4797]: I0216 11:27:54.600896 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 11:27:54 crc kubenswrapper[4797]: I0216 11:27:54.601402 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" containerName="kube-state-metrics" containerID="cri-o://5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0" gracePeriod=30
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.186711 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.291306 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn8f5\" (UniqueName: \"kubernetes.io/projected/3bf0ec48-8b5b-4671-b213-f04c4e66ad9e-kube-api-access-qn8f5\") pod \"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e\" (UID: \"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e\") "
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.297157 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf0ec48-8b5b-4671-b213-f04c4e66ad9e-kube-api-access-qn8f5" (OuterVolumeSpecName: "kube-api-access-qn8f5") pod "3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" (UID: "3bf0ec48-8b5b-4671-b213-f04c4e66ad9e"). InnerVolumeSpecName "kube-api-access-qn8f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.394426 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn8f5\" (UniqueName: \"kubernetes.io/projected/3bf0ec48-8b5b-4671-b213-f04c4e66ad9e-kube-api-access-qn8f5\") on node \"crc\" DevicePath \"\""
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.674834 4797 generic.go:334] "Generic (PLEG): container finished" podID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" containerID="5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0" exitCode=2
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.674884 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e","Type":"ContainerDied","Data":"5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0"}
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.674915 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3bf0ec48-8b5b-4671-b213-f04c4e66ad9e","Type":"ContainerDied","Data":"034c9c9e1cd9ea09d8236466a263a7082c660c0c772b748bf1ea0f4ea51c231d"}
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.674913 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.674988 4797 scope.go:117] "RemoveContainer" containerID="5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.719887 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.723514 4797 scope.go:117] "RemoveContainer" containerID="5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0"
Feb 16 11:27:55 crc kubenswrapper[4797]: E0216 11:27:55.726894 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0\": container with ID starting with 5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0 not found: ID does not exist" containerID="5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.726940 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0"} err="failed to get container status \"5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0\": rpc error: code = NotFound desc = could not find container \"5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0\": container with ID starting with 5ea066e469629e6f40df0aa67cd848bcc0b4039029f67a5fa597a8ee5e058de0 not found: ID does not exist"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.741435 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.754053 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 11:27:55 crc kubenswrapper[4797]: E0216 11:27:55.754885 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" containerName="kube-state-metrics"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.754904 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" containerName="kube-state-metrics"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.755191 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" containerName="kube-state-metrics"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.756202 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.766393 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.771051 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.771898 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.843233 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.843270 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.843279 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.843296 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.844877 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.844938 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.845037 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52bs6\" (UniqueName: \"kubernetes.io/projected/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-api-access-52bs6\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.863246 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.887550 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.887604 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.898935 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.964910 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.965080 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52bs6\" (UniqueName: \"kubernetes.io/projected/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-api-access-52bs6\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.965142 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.965301 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.973483 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:55 crc kubenswrapper[4797]: I0216 11:27:55.994010 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.003400 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52bs6\" (UniqueName: \"kubernetes.io/projected/23e53487-a14d-4b7b-8e1c-66c20d76309d-kube-api-access-52bs6\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.003673 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23e53487-a14d-4b7b-8e1c-66c20d76309d-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"23e53487-a14d-4b7b-8e1c-66c20d76309d\") " pod="openstack/kube-state-metrics-0"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.021757 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf0ec48-8b5b-4671-b213-f04c4e66ad9e" path="/var/lib/kubelet/pods/3bf0ec48-8b5b-4671-b213-f04c4e66ad9e/volumes"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.165362 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.218018 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.257678 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.335935 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-khspv"]
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.336157 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" podUID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerName="dnsmasq-dns" containerID="cri-o://2c28b27d117a180c8980327f89df3aad3efa0ba7c2379ec434836fd99c08c365" gracePeriod=10
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.730969 4797 generic.go:334] "Generic (PLEG): container finished" podID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerID="2c28b27d117a180c8980327f89df3aad3efa0ba7c2379ec434836fd99c08c365" exitCode=0
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.732414 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" event={"ID":"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d","Type":"ContainerDied","Data":"2c28b27d117a180c8980327f89df3aad3efa0ba7c2379ec434836fd99c08c365"}
Feb 16 11:27:56 crc kubenswrapper[4797]: W0216 11:27:56.747085 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23e53487_a14d_4b7b_8e1c_66c20d76309d.slice/crio-8c66b27445f337bb0449f2532dc5144f3295eab5be7421844ac7c1f277bdff32 WatchSource:0}: Error finding container 8c66b27445f337bb0449f2532dc5144f3295eab5be7421844ac7c1f277bdff32: Status 404 returned error can't find the container with id 8c66b27445f337bb0449f2532dc5144f3295eab5be7421844ac7c1f277bdff32
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.748921 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.771599 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.926350 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 11:27:56 crc kubenswrapper[4797]: I0216 11:27:56.926341 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.142258 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv"
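The two prober.go entries above show both nova-api containers failing their startup probe the same way: an HTTP GET against the pod IP on port 8774 that hits the client timeout. The probe parameters (period, timeout, failure threshold) live in the pod spec, which this log does not include, so the sketch below is only a standalone approximation of the request the kubelet is making:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Timeout is a placeholder; the real value comes from the probe spec.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.217.0.206:8774/")
	if err != nil {
		// kubelet records this case as probeResult="failure", as in the log
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}
```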
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.200247 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-svc\") pod \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") "
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.200473 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpwsf\" (UniqueName: \"kubernetes.io/projected/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-kube-api-access-fpwsf\") pod \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") "
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.200501 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-sb\") pod \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") "
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.200538 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-swift-storage-0\") pod \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") "
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.200573 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-nb\") pod \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") "
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.200667 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-config\") pod \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\" (UID: \"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d\") "
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.258934 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-kube-api-access-fpwsf" (OuterVolumeSpecName: "kube-api-access-fpwsf") pod "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" (UID: "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d"). InnerVolumeSpecName "kube-api-access-fpwsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.295108 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" (UID: "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.297132 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-config" (OuterVolumeSpecName: "config") pod "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" (UID: "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.304148 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" (UID: "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.304535 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-config\") on node \"crc\" DevicePath \"\""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.304555 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.304567 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpwsf\" (UniqueName: \"kubernetes.io/projected/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-kube-api-access-fpwsf\") on node \"crc\" DevicePath \"\""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.304596 4797 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.322280 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" (UID: "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.330017 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" (UID: "dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.406869 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.406918 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.659367 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.659689 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-central-agent" containerID="cri-o://5c7c9f3ed81231ad89e7d3e9819c24093d4551848f3a0dee4a07a8c48f2646b0" gracePeriod=30
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.659741 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-notification-agent" containerID="cri-o://eee129c076b870d469205fc5bd6d664bb4036911e5524eb52ad8b6c82718eb69" gracePeriod=30
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.659747 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="sg-core" containerID="cri-o://91d34a65c9989e1b43406325934a387dd1d20aab5f2edb04062f69f98f6ea9f2" gracePeriod=30
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.659807 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="proxy-httpd" containerID="cri-o://3b76d06b50a4e3717927a7f3d3a2c0886a5b44132ae16260aa281e2bd5207510" gracePeriod=30
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.742301 4797 generic.go:334] "Generic (PLEG): container finished" podID="25ccd652-9aab-49ee-bbad-cdb91133f3a6" containerID="d12dffc4553c752285f065aa7f527e129b1ac2a2e6055a45e5b3a47397d49ca0" exitCode=0
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.742385 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2mq9" event={"ID":"25ccd652-9aab-49ee-bbad-cdb91133f3a6","Type":"ContainerDied","Data":"d12dffc4553c752285f065aa7f527e129b1ac2a2e6055a45e5b3a47397d49ca0"}
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.746791 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv" event={"ID":"dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d","Type":"ContainerDied","Data":"2b6f131cc26f2e0e2d5632e31bac4481cfa43959f104a620996d59a57f881faf"}
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.746836 4797 scope.go:117] "RemoveContainer" containerID="2c28b27d117a180c8980327f89df3aad3efa0ba7c2379ec434836fd99c08c365"
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.746852 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-khspv"
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.749350 4797 generic.go:334] "Generic (PLEG): container finished" podID="982c5633-bbd8-4437-b8b5-11e3b6783e19" containerID="4f9d3169f2f6e894bd96783bcddf67034cbf3d40979ad943b78a467f492c124b" exitCode=0
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.749420 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" event={"ID":"982c5633-bbd8-4437-b8b5-11e3b6783e19","Type":"ContainerDied","Data":"4f9d3169f2f6e894bd96783bcddf67034cbf3d40979ad943b78a467f492c124b"}
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.758423 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"23e53487-a14d-4b7b-8e1c-66c20d76309d","Type":"ContainerStarted","Data":"56d36625e3087c29ca0f8195aa2407747b61eb9376bce7902ada7ab631681568"}
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.758485 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"23e53487-a14d-4b7b-8e1c-66c20d76309d","Type":"ContainerStarted","Data":"8c66b27445f337bb0449f2532dc5144f3295eab5be7421844ac7c1f277bdff32"}
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.758708 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.772105 4797 scope.go:117] "RemoveContainer" containerID="440518e9c4ad2edd6bc4f33e5257246ae9d79c003d368ae09be140b7dd8954ab"
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.828177 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.313060134 podStartE2EDuration="2.828159497s" podCreationTimestamp="2026-02-16 11:27:55 +0000 UTC" firstStartedPulling="2026-02-16 11:27:56.759337106 +0000 UTC m=+1271.479522076" lastFinishedPulling="2026-02-16 11:27:57.274436449 +0000 UTC m=+1271.994621439" observedRunningTime="2026-02-16 11:27:57.811766219 +0000 UTC m=+1272.531951189" watchObservedRunningTime="2026-02-16 11:27:57.828159497 +0000 UTC m=+1272.548344477"
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.842316 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-khspv"]
Feb 16 11:27:57 crc kubenswrapper[4797]: I0216 11:27:57.856127 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-khspv"]
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.034664 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" path="/var/lib/kubelet/pods/dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d/volumes"
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797333 4797 generic.go:334] "Generic (PLEG): container finished" podID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerID="3b76d06b50a4e3717927a7f3d3a2c0886a5b44132ae16260aa281e2bd5207510" exitCode=0
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797688 4797 generic.go:334] "Generic (PLEG): container finished" podID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerID="91d34a65c9989e1b43406325934a387dd1d20aab5f2edb04062f69f98f6ea9f2" exitCode=2
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797701 4797 generic.go:334] "Generic (PLEG): container finished" podID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerID="eee129c076b870d469205fc5bd6d664bb4036911e5524eb52ad8b6c82718eb69" exitCode=0
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797714 4797 generic.go:334] "Generic (PLEG): container finished" podID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerID="5c7c9f3ed81231ad89e7d3e9819c24093d4551848f3a0dee4a07a8c48f2646b0" exitCode=0
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797534 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerDied","Data":"3b76d06b50a4e3717927a7f3d3a2c0886a5b44132ae16260aa281e2bd5207510"}
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797970 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerDied","Data":"91d34a65c9989e1b43406325934a387dd1d20aab5f2edb04062f69f98f6ea9f2"}
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797987 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerDied","Data":"eee129c076b870d469205fc5bd6d664bb4036911e5524eb52ad8b6c82718eb69"}
Feb 16 11:27:58 crc kubenswrapper[4797]: I0216 11:27:58.797998 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerDied","Data":"5c7c9f3ed81231ad89e7d3e9819c24093d4551848f3a0dee4a07a8c48f2646b0"}
Feb 16 11:27:58 crc kubenswrapper[4797]: E0216 11:27:58.984379 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.059105 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.154532 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-scripts\") pod \"e283a412-6f1b-407b-83fe-f6adcd0d1456\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.154878 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-sg-core-conf-yaml\") pod \"e283a412-6f1b-407b-83fe-f6adcd0d1456\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.154956 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-config-data\") pod \"e283a412-6f1b-407b-83fe-f6adcd0d1456\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.155041 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-run-httpd\") pod \"e283a412-6f1b-407b-83fe-f6adcd0d1456\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.155084 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-log-httpd\") pod \"e283a412-6f1b-407b-83fe-f6adcd0d1456\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.155186 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-combined-ca-bundle\") pod \"e283a412-6f1b-407b-83fe-f6adcd0d1456\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.155212 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llf2v\" (UniqueName: \"kubernetes.io/projected/e283a412-6f1b-407b-83fe-f6adcd0d1456-kube-api-access-llf2v\") pod \"e283a412-6f1b-407b-83fe-f6adcd0d1456\" (UID: \"e283a412-6f1b-407b-83fe-f6adcd0d1456\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.155599 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e283a412-6f1b-407b-83fe-f6adcd0d1456" (UID: "e283a412-6f1b-407b-83fe-f6adcd0d1456"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.155987 4797 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.162492 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e283a412-6f1b-407b-83fe-f6adcd0d1456" (UID: "e283a412-6f1b-407b-83fe-f6adcd0d1456"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.175756 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-scripts" (OuterVolumeSpecName: "scripts") pod "e283a412-6f1b-407b-83fe-f6adcd0d1456" (UID: "e283a412-6f1b-407b-83fe-f6adcd0d1456"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.185882 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e283a412-6f1b-407b-83fe-f6adcd0d1456-kube-api-access-llf2v" (OuterVolumeSpecName: "kube-api-access-llf2v") pod "e283a412-6f1b-407b-83fe-f6adcd0d1456" (UID: "e283a412-6f1b-407b-83fe-f6adcd0d1456"). InnerVolumeSpecName "kube-api-access-llf2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.258207 4797 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e283a412-6f1b-407b-83fe-f6adcd0d1456-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.258348 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llf2v\" (UniqueName: \"kubernetes.io/projected/e283a412-6f1b-407b-83fe-f6adcd0d1456-kube-api-access-llf2v\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.258362 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.279967 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e283a412-6f1b-407b-83fe-f6adcd0d1456" (UID: "e283a412-6f1b-407b-83fe-f6adcd0d1456"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.362725 4797 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.372850 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-config-data" (OuterVolumeSpecName: "config-data") pod "e283a412-6f1b-407b-83fe-f6adcd0d1456" (UID: "e283a412-6f1b-407b-83fe-f6adcd0d1456"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.378153 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e283a412-6f1b-407b-83fe-f6adcd0d1456" (UID: "e283a412-6f1b-407b-83fe-f6adcd0d1456"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.402926 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.410067 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464328 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-combined-ca-bundle\") pod \"982c5633-bbd8-4437-b8b5-11e3b6783e19\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464413 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7gml\" (UniqueName: \"kubernetes.io/projected/25ccd652-9aab-49ee-bbad-cdb91133f3a6-kube-api-access-z7gml\") pod \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464476 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-scripts\") pod \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464556 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-config-data\") pod \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464644 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srgkv\" (UniqueName: \"kubernetes.io/projected/982c5633-bbd8-4437-b8b5-11e3b6783e19-kube-api-access-srgkv\") pod \"982c5633-bbd8-4437-b8b5-11e3b6783e19\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464676 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-scripts\") pod \"982c5633-bbd8-4437-b8b5-11e3b6783e19\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464781 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-combined-ca-bundle\") pod \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\" (UID: \"25ccd652-9aab-49ee-bbad-cdb91133f3a6\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.464830 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data\") pod \"982c5633-bbd8-4437-b8b5-11e3b6783e19\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.465281 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.465300 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e283a412-6f1b-407b-83fe-f6adcd0d1456-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.468795 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-scripts" (OuterVolumeSpecName: "scripts") pod "25ccd652-9aab-49ee-bbad-cdb91133f3a6" (UID: "25ccd652-9aab-49ee-bbad-cdb91133f3a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.468864 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/982c5633-bbd8-4437-b8b5-11e3b6783e19-kube-api-access-srgkv" (OuterVolumeSpecName: "kube-api-access-srgkv") pod "982c5633-bbd8-4437-b8b5-11e3b6783e19" (UID: "982c5633-bbd8-4437-b8b5-11e3b6783e19"). InnerVolumeSpecName "kube-api-access-srgkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.468965 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ccd652-9aab-49ee-bbad-cdb91133f3a6-kube-api-access-z7gml" (OuterVolumeSpecName: "kube-api-access-z7gml") pod "25ccd652-9aab-49ee-bbad-cdb91133f3a6" (UID: "25ccd652-9aab-49ee-bbad-cdb91133f3a6"). InnerVolumeSpecName "kube-api-access-z7gml". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.469936 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-scripts" (OuterVolumeSpecName: "scripts") pod "982c5633-bbd8-4437-b8b5-11e3b6783e19" (UID: "982c5633-bbd8-4437-b8b5-11e3b6783e19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.498276 4797 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data podName:982c5633-bbd8-4437-b8b5-11e3b6783e19 nodeName:}" failed. No retries permitted until 2026-02-16 11:27:59.998245136 +0000 UTC m=+1274.718430116 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data") pod "982c5633-bbd8-4437-b8b5-11e3b6783e19" (UID: "982c5633-bbd8-4437-b8b5-11e3b6783e19") : error deleting /var/lib/kubelet/pods/982c5633-bbd8-4437-b8b5-11e3b6783e19/volume-subpaths: remove /var/lib/kubelet/pods/982c5633-bbd8-4437-b8b5-11e3b6783e19/volume-subpaths: no such file or directory Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.500954 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-config-data" (OuterVolumeSpecName: "config-data") pod "25ccd652-9aab-49ee-bbad-cdb91133f3a6" (UID: "25ccd652-9aab-49ee-bbad-cdb91133f3a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.501783 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "982c5633-bbd8-4437-b8b5-11e3b6783e19" (UID: "982c5633-bbd8-4437-b8b5-11e3b6783e19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.502816 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25ccd652-9aab-49ee-bbad-cdb91133f3a6" (UID: "25ccd652-9aab-49ee-bbad-cdb91133f3a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.567542 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.567664 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.567678 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7gml\" (UniqueName: \"kubernetes.io/projected/25ccd652-9aab-49ee-bbad-cdb91133f3a6-kube-api-access-z7gml\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.567691 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.567702 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25ccd652-9aab-49ee-bbad-cdb91133f3a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.567711 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srgkv\" (UniqueName: \"kubernetes.io/projected/982c5633-bbd8-4437-b8b5-11e3b6783e19-kube-api-access-srgkv\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.567720 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.809335 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-s2mq9" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.809401 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-s2mq9" event={"ID":"25ccd652-9aab-49ee-bbad-cdb91133f3a6","Type":"ContainerDied","Data":"1bc4680f79df7a32749d479acd95847850dc95c4f5177ccd9595923030112db6"} Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.809449 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc4680f79df7a32749d479acd95847850dc95c4f5177ccd9595923030112db6" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.813802 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e283a412-6f1b-407b-83fe-f6adcd0d1456","Type":"ContainerDied","Data":"61b3ee3d2b34fd45d388078a5f9cd1e9a218e0f1ed675d082966cb902eaf8cc0"} Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.813860 4797 scope.go:117] "RemoveContainer" containerID="3b76d06b50a4e3717927a7f3d3a2c0886a5b44132ae16260aa281e2bd5207510" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.813867 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.815934 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" event={"ID":"982c5633-bbd8-4437-b8b5-11e3b6783e19","Type":"ContainerDied","Data":"ff98b9ed5381a1099479702e8de8b2b0670c4cdd2333d609177a56b16e677929"} Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.815970 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff98b9ed5381a1099479702e8de8b2b0670c4cdd2333d609177a56b16e677929" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.816032 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-m7cv5" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.843370 4797 scope.go:117] "RemoveContainer" containerID="91d34a65c9989e1b43406325934a387dd1d20aab5f2edb04062f69f98f6ea9f2" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.855524 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.875959 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.885145 4797 scope.go:117] "RemoveContainer" containerID="eee129c076b870d469205fc5bd6d664bb4036911e5524eb52ad8b6c82718eb69" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.898813 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899479 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerName="init" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899508 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerName="init" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899537 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25ccd652-9aab-49ee-bbad-cdb91133f3a6" containerName="nova-manage" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899550 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="25ccd652-9aab-49ee-bbad-cdb91133f3a6" containerName="nova-manage" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899564 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-central-agent" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899604 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-central-agent" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899621 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-notification-agent" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899634 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-notification-agent" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899652 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="proxy-httpd" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899662 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="proxy-httpd" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899698 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerName="dnsmasq-dns" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899708 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerName="dnsmasq-dns" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899728 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="sg-core" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899740 4797 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="sg-core" Feb 16 11:27:59 crc kubenswrapper[4797]: E0216 11:27:59.899768 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="982c5633-bbd8-4437-b8b5-11e3b6783e19" containerName="nova-cell1-conductor-db-sync" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.899781 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="982c5633-bbd8-4437-b8b5-11e3b6783e19" containerName="nova-cell1-conductor-db-sync" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.900108 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ccd652-9aab-49ee-bbad-cdb91133f3a6" containerName="nova-manage" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.900147 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-central-agent" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.900168 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfcc0ae9-70f9-4cb6-99ef-adf9d2dcaf7d" containerName="dnsmasq-dns" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.900179 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="proxy-httpd" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.900202 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="sg-core" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.900217 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" containerName="ceilometer-notification-agent" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.900236 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="982c5633-bbd8-4437-b8b5-11e3b6783e19" containerName="nova-cell1-conductor-db-sync" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.904852 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.908650 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.909470 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.909650 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.911277 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.912833 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.934434 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.936853 4797 scope.go:117] "RemoveContainer" containerID="5c7c9f3ed81231ad89e7d3e9819c24093d4551848f3a0dee4a07a8c48f2646b0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.945772 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.975839 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739194a8-bb4c-411b-ac3b-bb08c86be5f6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.975900 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.975988 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739194a8-bb4c-411b-ac3b-bb08c86be5f6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976023 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lg2b\" (UniqueName: \"kubernetes.io/projected/739194a8-bb4c-411b-ac3b-bb08c86be5f6-kube-api-access-7lg2b\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976077 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-config-data\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976107 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-run-httpd\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976122 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-568jv\" (UniqueName: \"kubernetes.io/projected/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-kube-api-access-568jv\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976197 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976227 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976291 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-log-httpd\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:27:59 crc kubenswrapper[4797]: I0216 11:27:59.976311 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-scripts\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.001588 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e283a412-6f1b-407b-83fe-f6adcd0d1456" path="/var/lib/kubelet/pods/e283a412-6f1b-407b-83fe-f6adcd0d1456/volumes" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.064199 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.064243 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.064423 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4f74c203-39da-4fe9-9cda-f5efdb0b5fad" containerName="nova-scheduler-scheduler" containerID="cri-o://91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7" gracePeriod=30 Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.064535 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-log" containerID="cri-o://ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c" gracePeriod=30 Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.064613 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-api" containerID="cri-o://92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc" gracePeriod=30 Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.079718 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data\") pod \"982c5633-bbd8-4437-b8b5-11e3b6783e19\" (UID: \"982c5633-bbd8-4437-b8b5-11e3b6783e19\") " Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080123 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-config-data\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080179 4797 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-run-httpd\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080195 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-568jv\" (UniqueName: \"kubernetes.io/projected/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-kube-api-access-568jv\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080259 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080291 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080323 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-log-httpd\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080345 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-scripts\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080412 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739194a8-bb4c-411b-ac3b-bb08c86be5f6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.080480 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.081240 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-log-httpd\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.081360 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739194a8-bb4c-411b-ac3b-bb08c86be5f6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:00 crc kubenswrapper[4797]: 
I0216 11:28:00.081383 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lg2b\" (UniqueName: \"kubernetes.io/projected/739194a8-bb4c-411b-ac3b-bb08c86be5f6-kube-api-access-7lg2b\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.083213 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-run-httpd\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.083537 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data" (OuterVolumeSpecName: "config-data") pod "982c5633-bbd8-4437-b8b5-11e3b6783e19" (UID: "982c5633-bbd8-4437-b8b5-11e3b6783e19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.087562 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.089205 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/739194a8-bb4c-411b-ac3b-bb08c86be5f6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.090138 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-config-data\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.101697 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.101733 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-scripts\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.101725 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739194a8-bb4c-411b-ac3b-bb08c86be5f6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.102348 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") 
" pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.106886 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-568jv\" (UniqueName: \"kubernetes.io/projected/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-kube-api-access-568jv\") pod \"ceilometer-0\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.107988 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lg2b\" (UniqueName: \"kubernetes.io/projected/739194a8-bb4c-411b-ac3b-bb08c86be5f6-kube-api-access-7lg2b\") pod \"nova-cell1-conductor-0\" (UID: \"739194a8-bb4c-411b-ac3b-bb08c86be5f6\") " pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.183540 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/982c5633-bbd8-4437-b8b5-11e3b6783e19-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.230780 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.243240 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:00 crc kubenswrapper[4797]: E0216 11:28:00.831159 4797 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 11:28:00 crc kubenswrapper[4797]: E0216 11:28:00.839016 4797 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 11:28:00 crc kubenswrapper[4797]: E0216 11:28:00.842669 4797 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 11:28:00 crc kubenswrapper[4797]: E0216 11:28:00.842720 4797 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4f74c203-39da-4fe9-9cda-f5efdb0b5fad" containerName="nova-scheduler-scheduler" Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.851837 4797 generic.go:334] "Generic (PLEG): container finished" podID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerID="ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c" exitCode=143 Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.852112 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3678dbc0-cd6b-4cf7-8695-d76da81e8107","Type":"ContainerDied","Data":"ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c"} Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.855499 
4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 11:28:00 crc kubenswrapper[4797]: W0216 11:28:00.937100 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30bb31d4_fd04_40bc_95f0_e3c7d5bb9a25.slice/crio-fa89dd7bbddd83cf092b9f853abb1b174504bbaeae34bc9e7073eb8e9ee9e29b WatchSource:0}: Error finding container fa89dd7bbddd83cf092b9f853abb1b174504bbaeae34bc9e7073eb8e9ee9e29b: Status 404 returned error can't find the container with id fa89dd7bbddd83cf092b9f853abb1b174504bbaeae34bc9e7073eb8e9ee9e29b Feb 16 11:28:00 crc kubenswrapper[4797]: I0216 11:28:00.939709 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:28:01 crc kubenswrapper[4797]: I0216 11:28:01.897027 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerStarted","Data":"54343067cdb4568a66e9b29ceda85ab2f973e77160a1fe246dd3a75f0d24e091"} Feb 16 11:28:01 crc kubenswrapper[4797]: I0216 11:28:01.897344 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerStarted","Data":"fa89dd7bbddd83cf092b9f853abb1b174504bbaeae34bc9e7073eb8e9ee9e29b"} Feb 16 11:28:01 crc kubenswrapper[4797]: I0216 11:28:01.898898 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"739194a8-bb4c-411b-ac3b-bb08c86be5f6","Type":"ContainerStarted","Data":"6f2801edb78937b7da8fca563d1016a12b5a9d62adc9e9497551ff7c6ecc45d9"} Feb 16 11:28:01 crc kubenswrapper[4797]: I0216 11:28:01.898929 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"739194a8-bb4c-411b-ac3b-bb08c86be5f6","Type":"ContainerStarted","Data":"c8b8acf09cdc02092bc97808a6e30695203e84323dcc16e9f7329a784b41e8a6"} Feb 16 11:28:01 crc kubenswrapper[4797]: I0216 11:28:01.899677 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:01 crc kubenswrapper[4797]: I0216 11:28:01.915391 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.915372019 podStartE2EDuration="2.915372019s" podCreationTimestamp="2026-02-16 11:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:01.913674922 +0000 UTC m=+1276.633859912" watchObservedRunningTime="2026-02-16 11:28:01.915372019 +0000 UTC m=+1276.635556999" Feb 16 11:28:02 crc kubenswrapper[4797]: I0216 11:28:02.912314 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerStarted","Data":"72c7a02f43a55fca594a677c442307ed686cc62acf7e1aa3948a6878a41aad57"} Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.716258 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.762216 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-combined-ca-bundle\") pod \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.762268 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5klkt\" (UniqueName: \"kubernetes.io/projected/3678dbc0-cd6b-4cf7-8695-d76da81e8107-kube-api-access-5klkt\") pod \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.762507 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-config-data\") pod \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.762612 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3678dbc0-cd6b-4cf7-8695-d76da81e8107-logs\") pod \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\" (UID: \"3678dbc0-cd6b-4cf7-8695-d76da81e8107\") " Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.763131 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3678dbc0-cd6b-4cf7-8695-d76da81e8107-logs" (OuterVolumeSpecName: "logs") pod "3678dbc0-cd6b-4cf7-8695-d76da81e8107" (UID: "3678dbc0-cd6b-4cf7-8695-d76da81e8107"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.763386 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3678dbc0-cd6b-4cf7-8695-d76da81e8107-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.772525 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3678dbc0-cd6b-4cf7-8695-d76da81e8107-kube-api-access-5klkt" (OuterVolumeSpecName: "kube-api-access-5klkt") pod "3678dbc0-cd6b-4cf7-8695-d76da81e8107" (UID: "3678dbc0-cd6b-4cf7-8695-d76da81e8107"). InnerVolumeSpecName "kube-api-access-5klkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.820035 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-config-data" (OuterVolumeSpecName: "config-data") pod "3678dbc0-cd6b-4cf7-8695-d76da81e8107" (UID: "3678dbc0-cd6b-4cf7-8695-d76da81e8107"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.825474 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3678dbc0-cd6b-4cf7-8695-d76da81e8107" (UID: "3678dbc0-cd6b-4cf7-8695-d76da81e8107"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.867969 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.868219 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5klkt\" (UniqueName: \"kubernetes.io/projected/3678dbc0-cd6b-4cf7-8695-d76da81e8107-kube-api-access-5klkt\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.868287 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3678dbc0-cd6b-4cf7-8695-d76da81e8107-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.924501 4797 generic.go:334] "Generic (PLEG): container finished" podID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerID="92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc" exitCode=0 Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.924531 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.924551 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3678dbc0-cd6b-4cf7-8695-d76da81e8107","Type":"ContainerDied","Data":"92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc"} Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.925962 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3678dbc0-cd6b-4cf7-8695-d76da81e8107","Type":"ContainerDied","Data":"20f420d3a5e207b82f258740e3c7634e92a8f34854ab67b89b5b4f48d72ebfa2"} Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.925999 4797 scope.go:117] "RemoveContainer" containerID="92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.928749 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerStarted","Data":"ecbd0e41e77b5c33dc1ab668127d2fc61ad9cf9bdbfb3a69d4796900a347c5e0"} Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.931960 4797 generic.go:334] "Generic (PLEG): container finished" podID="4f74c203-39da-4fe9-9cda-f5efdb0b5fad" containerID="91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7" exitCode=0 Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.932014 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f74c203-39da-4fe9-9cda-f5efdb0b5fad","Type":"ContainerDied","Data":"91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7"} Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.946528 4797 scope.go:117] "RemoveContainer" containerID="ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.972123 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.978697 4797 scope.go:117] "RemoveContainer" containerID="92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc" Feb 16 11:28:03 crc kubenswrapper[4797]: E0216 11:28:03.980166 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc\": container with ID starting with 92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc not found: ID does not exist" containerID="92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.980205 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc"} err="failed to get container status \"92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc\": rpc error: code = NotFound desc = could not find container \"92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc\": container with ID starting with 92ad3d4d26371917919ce97bb64c7a51ddf96a9ba70dc23e17e1f4b8987905cc not found: ID does not exist" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.980232 4797 scope.go:117] "RemoveContainer" containerID="ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c" Feb 16 11:28:03 crc kubenswrapper[4797]: E0216 11:28:03.980982 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c\": container with ID starting with ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c not found: ID does not exist" containerID="ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c" Feb 16 11:28:03 crc kubenswrapper[4797]: I0216 11:28:03.981118 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c"} err="failed to get container status \"ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c\": rpc error: code = NotFound desc = could not find container \"ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c\": container with ID starting with ee276d4201415c718bc24f69b6f1de382ce31c3913b50e67d9ea99a29339e05c not found: ID does not exist" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.005717 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.007808 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:04 crc kubenswrapper[4797]: E0216 11:28:04.008261 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-log" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.008282 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-log" Feb 16 11:28:04 crc kubenswrapper[4797]: E0216 11:28:04.008324 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-api" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.008333 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-api" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.008639 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-log" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.008670 4797 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" containerName="nova-api-api" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.009991 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.010307 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.013407 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.022419 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.075268 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-config-data\") pod \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.075521 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-combined-ca-bundle\") pod \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.075604 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twpc7\" (UniqueName: \"kubernetes.io/projected/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-kube-api-access-twpc7\") pod \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\" (UID: \"4f74c203-39da-4fe9-9cda-f5efdb0b5fad\") " Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.076387 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx7pz\" (UniqueName: \"kubernetes.io/projected/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-kube-api-access-hx7pz\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.076495 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.076569 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-config-data\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.077123 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-logs\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.106197 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-kube-api-access-twpc7" (OuterVolumeSpecName: 
"kube-api-access-twpc7") pod "4f74c203-39da-4fe9-9cda-f5efdb0b5fad" (UID: "4f74c203-39da-4fe9-9cda-f5efdb0b5fad"). InnerVolumeSpecName "kube-api-access-twpc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.133657 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-config-data" (OuterVolumeSpecName: "config-data") pod "4f74c203-39da-4fe9-9cda-f5efdb0b5fad" (UID: "4f74c203-39da-4fe9-9cda-f5efdb0b5fad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.150792 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f74c203-39da-4fe9-9cda-f5efdb0b5fad" (UID: "4f74c203-39da-4fe9-9cda-f5efdb0b5fad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.179752 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx7pz\" (UniqueName: \"kubernetes.io/projected/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-kube-api-access-hx7pz\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.180116 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.180176 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-config-data\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.180205 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-logs\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.180351 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.180365 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twpc7\" (UniqueName: \"kubernetes.io/projected/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-kube-api-access-twpc7\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.180378 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f74c203-39da-4fe9-9cda-f5efdb0b5fad-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.180808 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-logs\") pod \"nova-api-0\" 
(UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.187433 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.188032 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-config-data\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.197183 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx7pz\" (UniqueName: \"kubernetes.io/projected/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-kube-api-access-hx7pz\") pod \"nova-api-0\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.326418 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.886977 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.970608 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69","Type":"ContainerStarted","Data":"3e9e7b440e16d184c64252916138fa9cd827eb020d2086d566403d8a2b3f9390"} Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.978751 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4f74c203-39da-4fe9-9cda-f5efdb0b5fad","Type":"ContainerDied","Data":"49188170be287389d34a9963e68cacae9e7d74a696a1008c697a0b8722354376"} Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.978832 4797 scope.go:117] "RemoveContainer" containerID="91d6b29d9fdc5fb4c3d6b3de768efae3fa04636acf564856f99f86dfc27c43b7" Feb 16 11:28:04 crc kubenswrapper[4797]: I0216 11:28:04.979080 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.121938 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.145456 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.166687 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:28:05 crc kubenswrapper[4797]: E0216 11:28:05.167420 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f74c203-39da-4fe9-9cda-f5efdb0b5fad" containerName="nova-scheduler-scheduler" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.167436 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f74c203-39da-4fe9-9cda-f5efdb0b5fad" containerName="nova-scheduler-scheduler" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.167679 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f74c203-39da-4fe9-9cda-f5efdb0b5fad" containerName="nova-scheduler-scheduler" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.168661 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.171704 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.188978 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.203485 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.203541 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-config-data\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.203646 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjg4x\" (UniqueName: \"kubernetes.io/projected/b7231a84-883c-463c-958e-0d2222057f5e-kube-api-access-gjg4x\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.305019 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjg4x\" (UniqueName: \"kubernetes.io/projected/b7231a84-883c-463c-958e-0d2222057f5e-kube-api-access-gjg4x\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.305307 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 
11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.305374 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-config-data\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.313405 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-config-data\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.313444 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.329178 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjg4x\" (UniqueName: \"kubernetes.io/projected/b7231a84-883c-463c-958e-0d2222057f5e-kube-api-access-gjg4x\") pod \"nova-scheduler-0\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " pod="openstack/nova-scheduler-0" Feb 16 11:28:05 crc kubenswrapper[4797]: I0216 11:28:05.509166 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.012077 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3678dbc0-cd6b-4cf7-8695-d76da81e8107" path="/var/lib/kubelet/pods/3678dbc0-cd6b-4cf7-8695-d76da81e8107/volumes" Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.013029 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f74c203-39da-4fe9-9cda-f5efdb0b5fad" path="/var/lib/kubelet/pods/4f74c203-39da-4fe9-9cda-f5efdb0b5fad/volumes" Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.013552 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.013681 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerStarted","Data":"717660f9f3784c11ec811d5be9ce63291f0bb729ab750414baf44ef6266c4564"} Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.013701 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69","Type":"ContainerStarted","Data":"237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c"} Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.013714 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69","Type":"ContainerStarted","Data":"7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4"} Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.050873 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.084906 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.08488384 podStartE2EDuration="3.08488384s" 
podCreationTimestamp="2026-02-16 11:28:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:06.072212224 +0000 UTC m=+1280.792397204" watchObservedRunningTime="2026-02-16 11:28:06.08488384 +0000 UTC m=+1280.805068820" Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.111426 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.026292192 podStartE2EDuration="7.111404716s" podCreationTimestamp="2026-02-16 11:27:59 +0000 UTC" firstStartedPulling="2026-02-16 11:28:00.940675801 +0000 UTC m=+1275.660860781" lastFinishedPulling="2026-02-16 11:28:05.025788325 +0000 UTC m=+1279.745973305" observedRunningTime="2026-02-16 11:28:06.106482261 +0000 UTC m=+1280.826667241" watchObservedRunningTime="2026-02-16 11:28:06.111404716 +0000 UTC m=+1280.831589696" Feb 16 11:28:06 crc kubenswrapper[4797]: I0216 11:28:06.185383 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 11:28:07 crc kubenswrapper[4797]: I0216 11:28:07.010347 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7231a84-883c-463c-958e-0d2222057f5e","Type":"ContainerStarted","Data":"332b9d5178a1bc554c8f8fda345853e496aeb98b5e32eabc49e59e2a326e7d5c"} Feb 16 11:28:07 crc kubenswrapper[4797]: I0216 11:28:07.010682 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7231a84-883c-463c-958e-0d2222057f5e","Type":"ContainerStarted","Data":"dd8392c6e29f06135d51a0addff91135aa72f2756d64e1286bafbc16f944028f"} Feb 16 11:28:07 crc kubenswrapper[4797]: I0216 11:28:07.043869 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.043847528 podStartE2EDuration="2.043847528s" podCreationTimestamp="2026-02-16 11:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:07.02415841 +0000 UTC m=+1281.744343390" watchObservedRunningTime="2026-02-16 11:28:07.043847528 +0000 UTC m=+1281.764032518" Feb 16 11:28:10 crc kubenswrapper[4797]: I0216 11:28:10.270752 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 11:28:10 crc kubenswrapper[4797]: I0216 11:28:10.509258 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 11:28:13 crc kubenswrapper[4797]: E0216 11:28:13.985901 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:28:14 crc kubenswrapper[4797]: I0216 11:28:14.327152 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 11:28:14 crc kubenswrapper[4797]: I0216 11:28:14.327221 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 11:28:15 crc kubenswrapper[4797]: I0216 11:28:15.411810 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" 
containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 11:28:15 crc kubenswrapper[4797]: I0216 11:28:15.411789 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 11:28:15 crc kubenswrapper[4797]: I0216 11:28:15.510083 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 11:28:15 crc kubenswrapper[4797]: I0216 11:28:15.551233 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 11:28:16 crc kubenswrapper[4797]: I0216 11:28:16.158499 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.178993 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.182432 4797 generic.go:334] "Generic (PLEG): container finished" podID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerID="1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e" exitCode=137 Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.182495 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f4a570e-f010-42e7-8d2e-56d6e6777640","Type":"ContainerDied","Data":"1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e"} Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.182521 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3f4a570e-f010-42e7-8d2e-56d6e6777640","Type":"ContainerDied","Data":"cb72c429efe48e42571c1d2f7ce403aa81cbfd59141e43cde9a2d1483b16694a"} Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.182540 4797 scope.go:117] "RemoveContainer" containerID="1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.182545 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.192915 4797 generic.go:334] "Generic (PLEG): container finished" podID="b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" containerID="2374e0b87252c78fc1a3717a35a9256f740d933a57dbf1750bc32f70a3ac2f71" exitCode=137 Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.192963 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d","Type":"ContainerDied","Data":"2374e0b87252c78fc1a3717a35a9256f740d933a57dbf1750bc32f70a3ac2f71"} Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.192992 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d","Type":"ContainerDied","Data":"a83d9716258350b60c10b26bc0bb7ab6ecf567a7ed7d2aaddd868a46a5e361be"} Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.193006 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a83d9716258350b60c10b26bc0bb7ab6ecf567a7ed7d2aaddd868a46a5e361be" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.193079 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.214818 4797 scope.go:117] "RemoveContainer" containerID="8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.252442 4797 scope.go:117] "RemoveContainer" containerID="1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e" Feb 16 11:28:22 crc kubenswrapper[4797]: E0216 11:28:22.252957 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e\": container with ID starting with 1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e not found: ID does not exist" containerID="1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.252991 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e"} err="failed to get container status \"1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e\": rpc error: code = NotFound desc = could not find container \"1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e\": container with ID starting with 1a676fee48778595e9378840fdd12face5428ed3573778b790b377499607b24e not found: ID does not exist" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.253015 4797 scope.go:117] "RemoveContainer" containerID="8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4" Feb 16 11:28:22 crc kubenswrapper[4797]: E0216 11:28:22.253402 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4\": container with ID starting with 8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4 not found: ID does not exist" containerID="8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.253422 4797 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4"} err="failed to get container status \"8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4\": rpc error: code = NotFound desc = could not find container \"8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4\": container with ID starting with 8fc379a06f1667507a54eaac6df20e3324234a5c8a2fa27a30a7fec2a392abc4 not found: ID does not exist" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.378195 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-combined-ca-bundle\") pod \"3f4a570e-f010-42e7-8d2e-56d6e6777640\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.378298 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4a570e-f010-42e7-8d2e-56d6e6777640-logs\") pod \"3f4a570e-f010-42e7-8d2e-56d6e6777640\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.378347 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-combined-ca-bundle\") pod \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.378388 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84glt\" (UniqueName: \"kubernetes.io/projected/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-kube-api-access-84glt\") pod \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.379136 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p6dq\" (UniqueName: \"kubernetes.io/projected/3f4a570e-f010-42e7-8d2e-56d6e6777640-kube-api-access-6p6dq\") pod \"3f4a570e-f010-42e7-8d2e-56d6e6777640\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.379240 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-config-data\") pod \"3f4a570e-f010-42e7-8d2e-56d6e6777640\" (UID: \"3f4a570e-f010-42e7-8d2e-56d6e6777640\") " Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.379341 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data\") pod \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\" (UID: \"b635730e-ec75-48d6-b0eb-0c74bfa7ea0d\") " Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.379164 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4a570e-f010-42e7-8d2e-56d6e6777640-logs" (OuterVolumeSpecName: "logs") pod "3f4a570e-f010-42e7-8d2e-56d6e6777640" (UID: "3f4a570e-f010-42e7-8d2e-56d6e6777640"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.379983 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4a570e-f010-42e7-8d2e-56d6e6777640-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.384444 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4a570e-f010-42e7-8d2e-56d6e6777640-kube-api-access-6p6dq" (OuterVolumeSpecName: "kube-api-access-6p6dq") pod "3f4a570e-f010-42e7-8d2e-56d6e6777640" (UID: "3f4a570e-f010-42e7-8d2e-56d6e6777640"). InnerVolumeSpecName "kube-api-access-6p6dq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.384951 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-kube-api-access-84glt" (OuterVolumeSpecName: "kube-api-access-84glt") pod "b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" (UID: "b635730e-ec75-48d6-b0eb-0c74bfa7ea0d"). InnerVolumeSpecName "kube-api-access-84glt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.410237 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data" (OuterVolumeSpecName: "config-data") pod "b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" (UID: "b635730e-ec75-48d6-b0eb-0c74bfa7ea0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.413757 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f4a570e-f010-42e7-8d2e-56d6e6777640" (UID: "3f4a570e-f010-42e7-8d2e-56d6e6777640"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.416003 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-config-data" (OuterVolumeSpecName: "config-data") pod "3f4a570e-f010-42e7-8d2e-56d6e6777640" (UID: "3f4a570e-f010-42e7-8d2e-56d6e6777640"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.419831 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" (UID: "b635730e-ec75-48d6-b0eb-0c74bfa7ea0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.482798 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84glt\" (UniqueName: \"kubernetes.io/projected/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-kube-api-access-84glt\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.482842 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p6dq\" (UniqueName: \"kubernetes.io/projected/3f4a570e-f010-42e7-8d2e-56d6e6777640-kube-api-access-6p6dq\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.482861 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.482879 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.482898 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4a570e-f010-42e7-8d2e-56d6e6777640-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.482914 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.528549 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.541441 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.562405 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:22 crc kubenswrapper[4797]: E0216 11:28:22.563721 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerName="nova-metadata-metadata" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.563746 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerName="nova-metadata-metadata" Feb 16 11:28:22 crc kubenswrapper[4797]: E0216 11:28:22.563800 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.563809 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 11:28:22 crc kubenswrapper[4797]: E0216 11:28:22.563832 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerName="nova-metadata-log" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.563840 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerName="nova-metadata-log" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.564323 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" 
containerName="nova-metadata-metadata" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.564343 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" containerName="nova-metadata-log" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.564360 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.568526 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.576174 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.576569 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.602113 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.688941 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhv4\" (UniqueName: \"kubernetes.io/projected/1a2493bc-7d20-447c-9b50-1b0a283ae30e-kube-api-access-qmhv4\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.689041 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.689064 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2493bc-7d20-447c-9b50-1b0a283ae30e-logs\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.689091 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-config-data\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.689159 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.791363 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmhv4\" (UniqueName: \"kubernetes.io/projected/1a2493bc-7d20-447c-9b50-1b0a283ae30e-kube-api-access-qmhv4\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.791513 4797 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.791535 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2493bc-7d20-447c-9b50-1b0a283ae30e-logs\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.791567 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-config-data\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.791637 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.792208 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2493bc-7d20-447c-9b50-1b0a283ae30e-logs\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.797521 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.800439 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-config-data\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.808954 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.813204 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmhv4\" (UniqueName: \"kubernetes.io/projected/1a2493bc-7d20-447c-9b50-1b0a283ae30e-kube-api-access-qmhv4\") pod \"nova-metadata-0\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") " pod="openstack/nova-metadata-0" Feb 16 11:28:22 crc kubenswrapper[4797]: I0216 11:28:22.897344 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.205709 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.253578 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.266833 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.284237 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.286002 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.289022 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.289226 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.289356 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.293850 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.305935 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.306046 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd8xq\" (UniqueName: \"kubernetes.io/projected/b260ffc6-7065-4f58-8e23-f5b5367123c6-kube-api-access-cd8xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.306169 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.306275 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.306304 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: W0216 11:28:23.371598 4797 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1a2493bc_7d20_447c_9b50_1b0a283ae30e.slice/crio-74de123b3d02be3de614d1f5ece04ebf7c2b7beb8541aabea44ef64d4577999c WatchSource:0}: Error finding container 74de123b3d02be3de614d1f5ece04ebf7c2b7beb8541aabea44ef64d4577999c: Status 404 returned error can't find the container with id 74de123b3d02be3de614d1f5ece04ebf7c2b7beb8541aabea44ef64d4577999c Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.375940 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.407128 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.407226 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.407250 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.407274 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.407321 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd8xq\" (UniqueName: \"kubernetes.io/projected/b260ffc6-7065-4f58-8e23-f5b5367123c6-kube-api-access-cd8xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.412807 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.412854 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.417104 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.417472 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b260ffc6-7065-4f58-8e23-f5b5367123c6-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.428290 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd8xq\" (UniqueName: \"kubernetes.io/projected/b260ffc6-7065-4f58-8e23-f5b5367123c6-kube-api-access-cd8xq\") pod \"nova-cell1-novncproxy-0\" (UID: \"b260ffc6-7065-4f58-8e23-f5b5367123c6\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:23 crc kubenswrapper[4797]: I0216 11:28:23.610215 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:23.999838 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4a570e-f010-42e7-8d2e-56d6e6777640" path="/var/lib/kubelet/pods/3f4a570e-f010-42e7-8d2e-56d6e6777640/volumes" Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.000767 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b635730e-ec75-48d6-b0eb-0c74bfa7ea0d" path="/var/lib/kubelet/pods/b635730e-ec75-48d6-b0eb-0c74bfa7ea0d/volumes" Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.079716 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 11:28:24 crc kubenswrapper[4797]: W0216 11:28:24.081850 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb260ffc6_7065_4f58_8e23_f5b5367123c6.slice/crio-23e4e759a38def1b933f0ce4b5b3c68d5072f77b6473cba63dd4f88032c8a3bb WatchSource:0}: Error finding container 23e4e759a38def1b933f0ce4b5b3c68d5072f77b6473cba63dd4f88032c8a3bb: Status 404 returned error can't find the container with id 23e4e759a38def1b933f0ce4b5b3c68d5072f77b6473cba63dd4f88032c8a3bb Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.217447 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b260ffc6-7065-4f58-8e23-f5b5367123c6","Type":"ContainerStarted","Data":"23e4e759a38def1b933f0ce4b5b3c68d5072f77b6473cba63dd4f88032c8a3bb"} Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.220079 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a2493bc-7d20-447c-9b50-1b0a283ae30e","Type":"ContainerStarted","Data":"28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb"} Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.220104 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a2493bc-7d20-447c-9b50-1b0a283ae30e","Type":"ContainerStarted","Data":"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97"} Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.220115 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a2493bc-7d20-447c-9b50-1b0a283ae30e","Type":"ContainerStarted","Data":"74de123b3d02be3de614d1f5ece04ebf7c2b7beb8541aabea44ef64d4577999c"} Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.251316 4797 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.251294109 podStartE2EDuration="2.251294109s" podCreationTimestamp="2026-02-16 11:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:24.248905054 +0000 UTC m=+1298.969090034" watchObservedRunningTime="2026-02-16 11:28:24.251294109 +0000 UTC m=+1298.971479089" Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.335621 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.336850 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.343373 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 11:28:24 crc kubenswrapper[4797]: I0216 11:28:24.440978 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.229597 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b260ffc6-7065-4f58-8e23-f5b5367123c6","Type":"ContainerStarted","Data":"2cd733c33cef1388ba98178a6ac0270f5f419b25164a6b5e980af891edd35ef8"} Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.230160 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.233478 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.254399 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.254375342 podStartE2EDuration="2.254375342s" podCreationTimestamp="2026-02-16 11:28:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:25.246084526 +0000 UTC m=+1299.966269506" watchObservedRunningTime="2026-02-16 11:28:25.254375342 +0000 UTC m=+1299.974560322" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.461740 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h2nsb"] Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.463550 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.473258 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h2nsb"] Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.553810 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2cd\" (UniqueName: \"kubernetes.io/projected/075067a8-6831-4f7d-ad0b-7ee700dc165e-kube-api-access-lq2cd\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.553891 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.553920 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.553975 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.554010 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-config\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.554054 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.656240 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq2cd\" (UniqueName: \"kubernetes.io/projected/075067a8-6831-4f7d-ad0b-7ee700dc165e-kube-api-access-lq2cd\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.656347 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.656383 4797 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.656451 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.656497 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-config\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.656552 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.657476 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.657481 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.657692 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.658851 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-config\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.659518 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/075067a8-6831-4f7d-ad0b-7ee700dc165e-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.691609 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq2cd\" (UniqueName: 
\"kubernetes.io/projected/075067a8-6831-4f7d-ad0b-7ee700dc165e-kube-api-access-lq2cd\") pod \"dnsmasq-dns-89c5cd4d5-h2nsb\" (UID: \"075067a8-6831-4f7d-ad0b-7ee700dc165e\") " pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:25 crc kubenswrapper[4797]: I0216 11:28:25.803482 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:26 crc kubenswrapper[4797]: I0216 11:28:26.441015 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-h2nsb"] Feb 16 11:28:27 crc kubenswrapper[4797]: I0216 11:28:27.253377 4797 generic.go:334] "Generic (PLEG): container finished" podID="075067a8-6831-4f7d-ad0b-7ee700dc165e" containerID="76cc65e55b658aabeaa1ec6529ffbe9f382e7d6757a0355d55d9293a1e49bb7e" exitCode=0 Feb 16 11:28:27 crc kubenswrapper[4797]: I0216 11:28:27.255572 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" event={"ID":"075067a8-6831-4f7d-ad0b-7ee700dc165e","Type":"ContainerDied","Data":"76cc65e55b658aabeaa1ec6529ffbe9f382e7d6757a0355d55d9293a1e49bb7e"} Feb 16 11:28:27 crc kubenswrapper[4797]: I0216 11:28:27.255635 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" event={"ID":"075067a8-6831-4f7d-ad0b-7ee700dc165e","Type":"ContainerStarted","Data":"23204d1c696a7e0132bc459393fbc230cf74060119126adc8ff90ec2b79e1c3f"} Feb 16 11:28:27 crc kubenswrapper[4797]: I0216 11:28:27.789510 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:27 crc kubenswrapper[4797]: I0216 11:28:27.899905 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 11:28:27 crc kubenswrapper[4797]: I0216 11:28:27.900984 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 11:28:28 crc kubenswrapper[4797]: E0216 11:28:28.138383 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:28:28 crc kubenswrapper[4797]: E0216 11:28:28.138437 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:28:28 crc kubenswrapper[4797]: E0216 11:28:28.138546 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 11:28:28 crc kubenswrapper[4797]: E0216 11:28:28.139810 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.145377 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.145774 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-central-agent" containerID="cri-o://54343067cdb4568a66e9b29ceda85ab2f973e77160a1fe246dd3a75f0d24e091" gracePeriod=30 Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.145925 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="proxy-httpd" containerID="cri-o://717660f9f3784c11ec811d5be9ce63291f0bb729ab750414baf44ef6266c4564" gracePeriod=30 Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.145987 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="sg-core" containerID="cri-o://ecbd0e41e77b5c33dc1ab668127d2fc61ad9cf9bdbfb3a69d4796900a347c5e0" gracePeriod=30 Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.146038 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-notification-agent" containerID="cri-o://72c7a02f43a55fca594a677c442307ed686cc62acf7e1aa3948a6878a41aad57" gracePeriod=30 Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.156155 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.213:3000/\": read tcp 10.217.0.2:43200->10.217.0.213:3000: read: connection reset by peer" Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.265081 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-log" containerID="cri-o://7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4" gracePeriod=30 Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.266181 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" event={"ID":"075067a8-6831-4f7d-ad0b-7ee700dc165e","Type":"ContainerStarted","Data":"262a344a76424219a2eb79ae388b065c62c0b6c7b870568853666be707bc838b"} Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.266216 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.267051 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-api" containerID="cri-o://237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c" gracePeriod=30 Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.294075 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb" podStartSLOduration=3.294047865 podStartE2EDuration="3.294047865s" podCreationTimestamp="2026-02-16 11:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:28.285320607 +0000 UTC m=+1303.005505577" watchObservedRunningTime="2026-02-16 11:28:28.294047865 +0000 UTC m=+1303.014232845" Feb 16 11:28:28 crc kubenswrapper[4797]: I0216 11:28:28.610777 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.275076 4797 generic.go:334] "Generic (PLEG): container finished" podID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerID="717660f9f3784c11ec811d5be9ce63291f0bb729ab750414baf44ef6266c4564" exitCode=0 Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.275111 4797 generic.go:334] "Generic (PLEG): container finished" podID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerID="ecbd0e41e77b5c33dc1ab668127d2fc61ad9cf9bdbfb3a69d4796900a347c5e0" exitCode=2 Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.275121 4797 generic.go:334] "Generic (PLEG): container finished" podID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerID="54343067cdb4568a66e9b29ceda85ab2f973e77160a1fe246dd3a75f0d24e091" exitCode=0 Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.275143 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerDied","Data":"717660f9f3784c11ec811d5be9ce63291f0bb729ab750414baf44ef6266c4564"} Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.275178 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerDied","Data":"ecbd0e41e77b5c33dc1ab668127d2fc61ad9cf9bdbfb3a69d4796900a347c5e0"} Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.275190 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerDied","Data":"54343067cdb4568a66e9b29ceda85ab2f973e77160a1fe246dd3a75f0d24e091"} Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.277176 4797 generic.go:334] "Generic (PLEG): container finished" podID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerID="7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4" exitCode=143 Feb 16 11:28:29 crc kubenswrapper[4797]: I0216 11:28:29.277225 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69","Type":"ContainerDied","Data":"7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4"} Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.307872 4797 generic.go:334] "Generic (PLEG): container finished" podID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerID="72c7a02f43a55fca594a677c442307ed686cc62acf7e1aa3948a6878a41aad57" exitCode=0 Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.308160 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerDied","Data":"72c7a02f43a55fca594a677c442307ed686cc62acf7e1aa3948a6878a41aad57"} Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.633803 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.794894 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-log-httpd\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.794988 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-combined-ca-bundle\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.795078 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-568jv\" (UniqueName: \"kubernetes.io/projected/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-kube-api-access-568jv\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.795132 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-run-httpd\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.795219 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-scripts\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.795295 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-ceilometer-tls-certs\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.795324 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-config-data\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.795399 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-sg-core-conf-yaml\") pod \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\" (UID: \"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25\") " Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.797284 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.797453 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.801553 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-scripts" (OuterVolumeSpecName: "scripts") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.819825 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-kube-api-access-568jv" (OuterVolumeSpecName: "kube-api-access-568jv") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "kube-api-access-568jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.839112 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.857288 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.880153 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.897222 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-568jv\" (UniqueName: \"kubernetes.io/projected/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-kube-api-access-568jv\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.897258 4797 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.897269 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.897297 4797 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.897309 4797 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.897322 4797 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.897333 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.918439 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-config-data" (OuterVolumeSpecName: "config-data") pod "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" (UID: "30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:30 crc kubenswrapper[4797]: I0216 11:28:30.999177 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.320819 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25","Type":"ContainerDied","Data":"fa89dd7bbddd83cf092b9f853abb1b174504bbaeae34bc9e7073eb8e9ee9e29b"} Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.320885 4797 scope.go:117] "RemoveContainer" containerID="717660f9f3784c11ec811d5be9ce63291f0bb729ab750414baf44ef6266c4564" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.320886 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.365674 4797 scope.go:117] "RemoveContainer" containerID="ecbd0e41e77b5c33dc1ab668127d2fc61ad9cf9bdbfb3a69d4796900a347c5e0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.376619 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.393373 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.400869 4797 scope.go:117] "RemoveContainer" containerID="72c7a02f43a55fca594a677c442307ed686cc62acf7e1aa3948a6878a41aad57" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.411567 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:28:31 crc kubenswrapper[4797]: E0216 11:28:31.412118 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-central-agent" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412145 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-central-agent" Feb 16 11:28:31 crc kubenswrapper[4797]: E0216 11:28:31.412161 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="sg-core" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412169 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="sg-core" Feb 16 11:28:31 crc kubenswrapper[4797]: E0216 11:28:31.412187 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-notification-agent" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412195 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-notification-agent" Feb 16 11:28:31 crc kubenswrapper[4797]: E0216 11:28:31.412226 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="proxy-httpd" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412235 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="proxy-httpd" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412487 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="sg-core" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412517 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-notification-agent" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412531 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="proxy-httpd" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.412556 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="ceilometer-central-agent" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.415094 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.417855 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.418294 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.419150 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.429465 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.432300 4797 scope.go:117] "RemoveContainer" containerID="54343067cdb4568a66e9b29ceda85ab2f973e77160a1fe246dd3a75f0d24e091" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.610436 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25bc0b36-a550-45a1-9632-088bfd0b2249-run-httpd\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.610963 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvx7g\" (UniqueName: \"kubernetes.io/projected/25bc0b36-a550-45a1-9632-088bfd0b2249-kube-api-access-qvx7g\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.611316 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.611406 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.611527 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25bc0b36-a550-45a1-9632-088bfd0b2249-log-httpd\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.611725 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-scripts\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.611855 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: 
I0216 11:28:31.611907 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-config-data\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713421 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713491 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-config-data\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713533 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25bc0b36-a550-45a1-9632-088bfd0b2249-run-httpd\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713620 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvx7g\" (UniqueName: \"kubernetes.io/projected/25bc0b36-a550-45a1-9632-088bfd0b2249-kube-api-access-qvx7g\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713716 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713737 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713796 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25bc0b36-a550-45a1-9632-088bfd0b2249-log-httpd\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.713854 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-scripts\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.717890 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25bc0b36-a550-45a1-9632-088bfd0b2249-log-httpd\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 
11:28:31.718768 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/25bc0b36-a550-45a1-9632-088bfd0b2249-run-httpd\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.719881 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-scripts\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.720194 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.721303 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.726479 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.727135 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25bc0b36-a550-45a1-9632-088bfd0b2249-config-data\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.731875 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvx7g\" (UniqueName: \"kubernetes.io/projected/25bc0b36-a550-45a1-9632-088bfd0b2249-kube-api-access-qvx7g\") pod \"ceilometer-0\" (UID: \"25bc0b36-a550-45a1-9632-088bfd0b2249\") " pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.750553 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 11:28:31 crc kubenswrapper[4797]: I0216 11:28:31.908431 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.001441 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" path="/var/lib/kubelet/pods/30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25/volumes" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.020474 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-config-data\") pod \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.020714 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-logs\") pod \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.020799 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx7pz\" (UniqueName: \"kubernetes.io/projected/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-kube-api-access-hx7pz\") pod \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.020858 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-combined-ca-bundle\") pod \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\" (UID: \"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69\") " Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.027149 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-logs" (OuterVolumeSpecName: "logs") pod "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" (UID: "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.028083 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-kube-api-access-hx7pz" (OuterVolumeSpecName: "kube-api-access-hx7pz") pod "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" (UID: "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69"). InnerVolumeSpecName "kube-api-access-hx7pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.068818 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" (UID: "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.080746 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-config-data" (OuterVolumeSpecName: "config-data") pod "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" (UID: "c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.126012 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.126054 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.126066 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-logs\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.126076 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx7pz\" (UniqueName: \"kubernetes.io/projected/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69-kube-api-access-hx7pz\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.339992 4797 generic.go:334] "Generic (PLEG): container finished" podID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerID="237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c" exitCode=0 Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.340131 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.340158 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69","Type":"ContainerDied","Data":"237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c"} Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.343687 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69","Type":"ContainerDied","Data":"3e9e7b440e16d184c64252916138fa9cd827eb020d2086d566403d8a2b3f9390"} Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.343718 4797 scope.go:117] "RemoveContainer" containerID="237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c" Feb 16 11:28:32 crc kubenswrapper[4797]: W0216 11:28:32.351186 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25bc0b36_a550_45a1_9632_088bfd0b2249.slice/crio-ed2c70af66918ea68e8793634181bf425ec7ff89439524973f778d7b16b0955b WatchSource:0}: Error finding container ed2c70af66918ea68e8793634181bf425ec7ff89439524973f778d7b16b0955b: Status 404 returned error can't find the container with id ed2c70af66918ea68e8793634181bf425ec7ff89439524973f778d7b16b0955b Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.353758 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.417827 4797 scope.go:117] "RemoveContainer" containerID="7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.433821 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.450670 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.460261 4797 scope.go:117] 
"RemoveContainer" containerID="237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c" Feb 16 11:28:32 crc kubenswrapper[4797]: E0216 11:28:32.460824 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c\": container with ID starting with 237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c not found: ID does not exist" containerID="237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.460865 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c"} err="failed to get container status \"237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c\": rpc error: code = NotFound desc = could not find container \"237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c\": container with ID starting with 237fbd2c9f9ab1c670889ba49ec97708600ce7b0572f90b6f5974e875411ed9c not found: ID does not exist" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.460909 4797 scope.go:117] "RemoveContainer" containerID="7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4" Feb 16 11:28:32 crc kubenswrapper[4797]: E0216 11:28:32.461169 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4\": container with ID starting with 7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4 not found: ID does not exist" containerID="7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.461206 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4"} err="failed to get container status \"7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4\": rpc error: code = NotFound desc = could not find container \"7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4\": container with ID starting with 7bf93c70b7d726953582461b72d7965bee44cca65b399bd8e14ab1bc532606e4 not found: ID does not exist" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.466982 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:32 crc kubenswrapper[4797]: E0216 11:28:32.467523 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-api" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.467546 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-api" Feb 16 11:28:32 crc kubenswrapper[4797]: E0216 11:28:32.467608 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-log" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.467617 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-log" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.467862 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-log" Feb 16 11:28:32 crc 
kubenswrapper[4797]: I0216 11:28:32.467907 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" containerName="nova-api-api" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.469386 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.471323 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.472161 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.473528 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.483958 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.634381 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-logs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.634423 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.634448 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.634475 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.634699 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-config-data\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.634724 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpzhx\" (UniqueName: \"kubernetes.io/projected/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-kube-api-access-jpzhx\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.736922 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " 
pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.737039 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-config-data\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.737059 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpzhx\" (UniqueName: \"kubernetes.io/projected/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-kube-api-access-jpzhx\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.737187 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-logs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.737205 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.737229 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.737776 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-logs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.741669 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.741715 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.746953 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-public-tls-certs\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.750267 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-config-data\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.764411 
4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpzhx\" (UniqueName: \"kubernetes.io/projected/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-kube-api-access-jpzhx\") pod \"nova-api-0\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") " pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.795200 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.898697 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 11:28:32 crc kubenswrapper[4797]: I0216 11:28:32.899035 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.362558 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.370382 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25bc0b36-a550-45a1-9632-088bfd0b2249","Type":"ContainerStarted","Data":"f4372debb0ad6b9a3b4675128019b045d7dd421560ed3427de8537945bfd93c0"} Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.370417 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25bc0b36-a550-45a1-9632-088bfd0b2249","Type":"ContainerStarted","Data":"ed2c70af66918ea68e8793634181bf425ec7ff89439524973f778d7b16b0955b"} Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.613726 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.634304 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.920741 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.920707 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 11:28:33 crc kubenswrapper[4797]: I0216 11:28:33.995942 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69" path="/var/lib/kubelet/pods/c6b3a375-d1e4-4c0a-a071-b3b4d7ac9d69/volumes" Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.383028 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cafa8d9-cdde-4277-8756-12acaf5c2bf2","Type":"ContainerStarted","Data":"a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25"} Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.383074 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cafa8d9-cdde-4277-8756-12acaf5c2bf2","Type":"ContainerStarted","Data":"f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb"} Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.383086 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cafa8d9-cdde-4277-8756-12acaf5c2bf2","Type":"ContainerStarted","Data":"e18a81a8af8791a09334f1a59345cc02a64ef5cbeff3b226db195fe16a68cc08"}
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.412804 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.412763666 podStartE2EDuration="2.412763666s" podCreationTimestamp="2026-02-16 11:28:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:34.407934035 +0000 UTC m=+1309.128119015" watchObservedRunningTime="2026-02-16 11:28:34.412763666 +0000 UTC m=+1309.132948656"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.418900 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.603656 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bxff6"]
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.605093 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.607531 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.608739 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.623305 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxff6"]
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.789293 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtm9t\" (UniqueName: \"kubernetes.io/projected/0b3a376f-d5a3-4695-8ace-93c71d98e93b-kube-api-access-wtm9t\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.789713 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-config-data\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.789927 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.790078 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-scripts\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
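[Annotation] The "Observed pod startup duration" entry for nova-api-0 above is worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and because both pull timestamps are the zero time (no image had to be pulled), podStartSLOduration equals the end-to-end figure. The arithmetic, redone with the values from the entry; the "m=+…" monotonic-clock suffixes that Go's time.Time String() appends are dropped before parsing:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's reference-time layout matching the timestamps in the log entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-02-16 11:28:32 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-02-16 11:28:34.412763666 +0000 UTC")
	fmt.Println(observed.Sub(created)) // 2.412763666s, matching podStartE2EDuration
}
```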
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.892610 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.892940 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-scripts\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.893144 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtm9t\" (UniqueName: \"kubernetes.io/projected/0b3a376f-d5a3-4695-8ace-93c71d98e93b-kube-api-access-wtm9t\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.893283 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-config-data\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.897816 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-scripts\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.898145 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-config-data\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.899105 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:34 crc kubenswrapper[4797]: I0216 11:28:34.911400 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtm9t\" (UniqueName: \"kubernetes.io/projected/0b3a376f-d5a3-4695-8ace-93c71d98e93b-kube-api-access-wtm9t\") pod \"nova-cell1-cell-mapping-bxff6\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " pod="openstack/nova-cell1-cell-mapping-bxff6"
Feb 16 11:28:35 crc kubenswrapper[4797]: I0216 11:28:35.020601 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxff6"
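[Annotation] The nova-cell1-cell-mapping-bxff6 mount sequence shows the volume manager's reconcile pattern that recurs throughout this log: reconciler_common.go:245 ("VerifyControllerAttachedVolume started") checks a volume against the actual state of the world, reconciler_common.go:218 ("MountVolume started") queues the operation, and operation_generator.go:637 ("MountVolume.SetUp succeeded") records completion. A schematic sketch of that desired-state-versus-actual-state loop; the types and names are illustrative, not the kubelet's real API:

```go
package main

import "fmt"

type volume struct{ name, pod string }

// reconcile walks the desired state and mounts whatever is missing from the
// actual state, logging in roughly the order seen above. The real kubelet
// runs SetUp asynchronously through an operation executor.
func reconcile(desired []volume, actual map[string]bool) {
	for _, v := range desired {
		key := v.pod + "/" + v.name
		if actual[key] {
			continue // already mounted; nothing to do
		}
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		actual[key] = true // stand-in for the actual mount work
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
	}
}

func main() {
	desired := []volume{
		{"scripts", "nova-cell1-cell-mapping-bxff6"},
		{"config-data", "nova-cell1-cell-mapping-bxff6"},
	}
	reconcile(desired, map[string]bool{})
}
```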
Feb 16 11:28:35 crc kubenswrapper[4797]: I0216 11:28:35.407368 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25bc0b36-a550-45a1-9632-088bfd0b2249","Type":"ContainerStarted","Data":"6e3d727de561724e8fd482a36cc0e21db4ece3ef1994976d72e220cf4588c5dd"}
Feb 16 11:28:35 crc kubenswrapper[4797]: I0216 11:28:35.515131 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxff6"]
Feb 16 11:28:35 crc kubenswrapper[4797]: W0216 11:28:35.524701 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3a376f_d5a3_4695_8ace_93c71d98e93b.slice/crio-63115d9ebe7042908752d503132cc707610182926048bd3f508d915be0e30bf9 WatchSource:0}: Error finding container 63115d9ebe7042908752d503132cc707610182926048bd3f508d915be0e30bf9: Status 404 returned error can't find the container with id 63115d9ebe7042908752d503132cc707610182926048bd3f508d915be0e30bf9
Feb 16 11:28:35 crc kubenswrapper[4797]: I0216 11:28:35.805857 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-h2nsb"
Feb 16 11:28:35 crc kubenswrapper[4797]: I0216 11:28:35.874434 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jm6gb"]
Feb 16 11:28:35 crc kubenswrapper[4797]: I0216 11:28:35.875093 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerName="dnsmasq-dns" containerID="cri-o://d47cca45405da8d9352199490ddecb8520d703efaef276cde64f167eed2decd2" gracePeriod=10
Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.424947 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25bc0b36-a550-45a1-9632-088bfd0b2249","Type":"ContainerStarted","Data":"acbc4c18d1ad7b6a32faed07f112eecc0405c77b4078bdac770b72d9625f2c26"}
Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.429299 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxff6" event={"ID":"0b3a376f-d5a3-4695-8ace-93c71d98e93b","Type":"ContainerStarted","Data":"021289fd304faa39aaf690303feba21fcbb31f01149499091b3dd610d009e0a6"}
Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.429668 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxff6" event={"ID":"0b3a376f-d5a3-4695-8ace-93c71d98e93b","Type":"ContainerStarted","Data":"63115d9ebe7042908752d503132cc707610182926048bd3f508d915be0e30bf9"}
Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.436295 4797 generic.go:334] "Generic (PLEG): container finished" podID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerID="d47cca45405da8d9352199490ddecb8520d703efaef276cde64f167eed2decd2" exitCode=0
Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.436351 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" event={"ID":"13a4aa0f-f231-4931-b9c4-78f032d96d5f","Type":"ContainerDied","Data":"d47cca45405da8d9352199490ddecb8520d703efaef276cde64f167eed2decd2"}
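[Annotation] "Killing container with a grace period" with gracePeriod=10 means the runtime sends the container SIGTERM and escalates to SIGKILL only if it is still running when the grace period lapses; dnsmasq-dns exits promptly here, hence exitCode=0 above. A host-process sketch of that stop sequence; this illustrates the convention, it is not CRI-O's implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to gracePeriod, then SIGKILLs.
func stopWithGrace(cmd *exec.Cmd, gracePeriod time.Duration) {
	cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		fmt.Println("exited within grace period:", err)
	case <-time.After(gracePeriod):
		cmd.Process.Kill() // grace period elapsed: SIGKILL
		fmt.Println("killed after grace period:", <-done)
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	cmd.Start()
	stopWithGrace(cmd, 10*time.Second)
}
```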
event={"ID":"13a4aa0f-f231-4931-b9c4-78f032d96d5f","Type":"ContainerDied","Data":"57b36923d8414677db5e19a0c191714088cdedc40ef3a3d3c8ba9e771f234b18"} Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.436396 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b36923d8414677db5e19a0c191714088cdedc40ef3a3d3c8ba9e771f234b18" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.466241 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bxff6" podStartSLOduration=2.466223947 podStartE2EDuration="2.466223947s" podCreationTimestamp="2026-02-16 11:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:36.444966296 +0000 UTC m=+1311.165151276" watchObservedRunningTime="2026-02-16 11:28:36.466223947 +0000 UTC m=+1311.186408927" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.475557 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.651338 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-nb\") pod \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.651406 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-sb\") pod \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.651430 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-svc\") pod \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.651492 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6ztw\" (UniqueName: \"kubernetes.io/projected/13a4aa0f-f231-4931-b9c4-78f032d96d5f-kube-api-access-x6ztw\") pod \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.651526 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-config\") pod \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.651727 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-swift-storage-0\") pod \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\" (UID: \"13a4aa0f-f231-4931-b9c4-78f032d96d5f\") " Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.694036 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a4aa0f-f231-4931-b9c4-78f032d96d5f-kube-api-access-x6ztw" (OuterVolumeSpecName: "kube-api-access-x6ztw") pod 
"13a4aa0f-f231-4931-b9c4-78f032d96d5f" (UID: "13a4aa0f-f231-4931-b9c4-78f032d96d5f"). InnerVolumeSpecName "kube-api-access-x6ztw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.725402 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "13a4aa0f-f231-4931-b9c4-78f032d96d5f" (UID: "13a4aa0f-f231-4931-b9c4-78f032d96d5f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.747774 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-config" (OuterVolumeSpecName: "config") pod "13a4aa0f-f231-4931-b9c4-78f032d96d5f" (UID: "13a4aa0f-f231-4931-b9c4-78f032d96d5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.756966 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.757001 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6ztw\" (UniqueName: \"kubernetes.io/projected/13a4aa0f-f231-4931-b9c4-78f032d96d5f-kube-api-access-x6ztw\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.757018 4797 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-config\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.797136 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "13a4aa0f-f231-4931-b9c4-78f032d96d5f" (UID: "13a4aa0f-f231-4931-b9c4-78f032d96d5f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.820211 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "13a4aa0f-f231-4931-b9c4-78f032d96d5f" (UID: "13a4aa0f-f231-4931-b9c4-78f032d96d5f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.848052 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "13a4aa0f-f231-4931-b9c4-78f032d96d5f" (UID: "13a4aa0f-f231-4931-b9c4-78f032d96d5f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.860317 4797 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.860354 4797 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:36 crc kubenswrapper[4797]: I0216 11:28:36.860391 4797 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/13a4aa0f-f231-4931-b9c4-78f032d96d5f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:37 crc kubenswrapper[4797]: I0216 11:28:37.448766 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"25bc0b36-a550-45a1-9632-088bfd0b2249","Type":"ContainerStarted","Data":"a63ab1f0ec381914e2594680a246ddc4277134d36d868b1b3d40157eda9eb8ea"} Feb 16 11:28:37 crc kubenswrapper[4797]: I0216 11:28:37.449004 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 11:28:37 crc kubenswrapper[4797]: I0216 11:28:37.448818 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" Feb 16 11:28:37 crc kubenswrapper[4797]: I0216 11:28:37.474475 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.840188603 podStartE2EDuration="6.474451451s" podCreationTimestamp="2026-02-16 11:28:31 +0000 UTC" firstStartedPulling="2026-02-16 11:28:32.35978893 +0000 UTC m=+1307.079973910" lastFinishedPulling="2026-02-16 11:28:36.994051778 +0000 UTC m=+1311.714236758" observedRunningTime="2026-02-16 11:28:37.468393145 +0000 UTC m=+1312.188578145" watchObservedRunningTime="2026-02-16 11:28:37.474451451 +0000 UTC m=+1312.194636431" Feb 16 11:28:37 crc kubenswrapper[4797]: I0216 11:28:37.503838 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jm6gb"] Feb 16 11:28:37 crc kubenswrapper[4797]: I0216 11:28:37.519750 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-jm6gb"] Feb 16 11:28:38 crc kubenswrapper[4797]: I0216 11:28:37.998317 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" path="/var/lib/kubelet/pods/13a4aa0f-f231-4931-b9c4-78f032d96d5f/volumes" Feb 16 11:28:40 crc kubenswrapper[4797]: E0216 11:28:40.993366 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:28:41 crc kubenswrapper[4797]: I0216 11:28:41.258774 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-jm6gb" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.210:5353: i/o timeout" Feb 16 11:28:41 crc kubenswrapper[4797]: I0216 11:28:41.490807 4797 generic.go:334] "Generic (PLEG): container finished" 
podID="0b3a376f-d5a3-4695-8ace-93c71d98e93b" containerID="021289fd304faa39aaf690303feba21fcbb31f01149499091b3dd610d009e0a6" exitCode=0 Feb 16 11:28:41 crc kubenswrapper[4797]: I0216 11:28:41.490866 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxff6" event={"ID":"0b3a376f-d5a3-4695-8ace-93c71d98e93b","Type":"ContainerDied","Data":"021289fd304faa39aaf690303feba21fcbb31f01149499091b3dd610d009e0a6"} Feb 16 11:28:42 crc kubenswrapper[4797]: I0216 11:28:42.795706 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 11:28:42 crc kubenswrapper[4797]: I0216 11:28:42.797089 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 11:28:42 crc kubenswrapper[4797]: I0216 11:28:42.880908 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxff6" Feb 16 11:28:42 crc kubenswrapper[4797]: I0216 11:28:42.908807 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 11:28:42 crc kubenswrapper[4797]: I0216 11:28:42.917013 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 11:28:42 crc kubenswrapper[4797]: I0216 11:28:42.926169 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.011127 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtm9t\" (UniqueName: \"kubernetes.io/projected/0b3a376f-d5a3-4695-8ace-93c71d98e93b-kube-api-access-wtm9t\") pod \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.011615 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-config-data\") pod \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.011706 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-scripts\") pod \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.011771 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-combined-ca-bundle\") pod \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\" (UID: \"0b3a376f-d5a3-4695-8ace-93c71d98e93b\") " Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.017720 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3a376f-d5a3-4695-8ace-93c71d98e93b-kube-api-access-wtm9t" (OuterVolumeSpecName: "kube-api-access-wtm9t") pod "0b3a376f-d5a3-4695-8ace-93c71d98e93b" (UID: "0b3a376f-d5a3-4695-8ace-93c71d98e93b"). InnerVolumeSpecName "kube-api-access-wtm9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.034822 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-scripts" (OuterVolumeSpecName: "scripts") pod "0b3a376f-d5a3-4695-8ace-93c71d98e93b" (UID: "0b3a376f-d5a3-4695-8ace-93c71d98e93b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.044428 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-config-data" (OuterVolumeSpecName: "config-data") pod "0b3a376f-d5a3-4695-8ace-93c71d98e93b" (UID: "0b3a376f-d5a3-4695-8ace-93c71d98e93b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.061194 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b3a376f-d5a3-4695-8ace-93c71d98e93b" (UID: "0b3a376f-d5a3-4695-8ace-93c71d98e93b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.115829 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.115860 4797 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.115872 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3a376f-d5a3-4695-8ace-93c71d98e93b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.115883 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtm9t\" (UniqueName: \"kubernetes.io/projected/0b3a376f-d5a3-4695-8ace-93c71d98e93b-kube-api-access-wtm9t\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.514016 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bxff6" event={"ID":"0b3a376f-d5a3-4695-8ace-93c71d98e93b","Type":"ContainerDied","Data":"63115d9ebe7042908752d503132cc707610182926048bd3f508d915be0e30bf9"} Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.514067 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63115d9ebe7042908752d503132cc707610182926048bd3f508d915be0e30bf9" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.514680 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bxff6" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.535770 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.768413 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.768727 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b7231a84-883c-463c-958e-0d2222057f5e" containerName="nova-scheduler-scheduler" containerID="cri-o://332b9d5178a1bc554c8f8fda345853e496aeb98b5e32eabc49e59e2a326e7d5c" gracePeriod=30 Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.787204 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.799915 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.807915 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 11:28:43 crc kubenswrapper[4797]: I0216 11:28:43.807974 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.221:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.526652 4797 generic.go:334] "Generic (PLEG): container finished" podID="b7231a84-883c-463c-958e-0d2222057f5e" containerID="332b9d5178a1bc554c8f8fda345853e496aeb98b5e32eabc49e59e2a326e7d5c" exitCode=0 Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.527682 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7231a84-883c-463c-958e-0d2222057f5e","Type":"ContainerDied","Data":"332b9d5178a1bc554c8f8fda345853e496aeb98b5e32eabc49e59e2a326e7d5c"} Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.527923 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-log" containerID="cri-o://f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb" gracePeriod=30 Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.528018 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-api" containerID="cri-o://a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25" gracePeriod=30 Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.831621 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.962540 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-combined-ca-bundle\") pod \"b7231a84-883c-463c-958e-0d2222057f5e\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.962760 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-config-data\") pod \"b7231a84-883c-463c-958e-0d2222057f5e\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.962915 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjg4x\" (UniqueName: \"kubernetes.io/projected/b7231a84-883c-463c-958e-0d2222057f5e-kube-api-access-gjg4x\") pod \"b7231a84-883c-463c-958e-0d2222057f5e\" (UID: \"b7231a84-883c-463c-958e-0d2222057f5e\") " Feb 16 11:28:44 crc kubenswrapper[4797]: I0216 11:28:44.991521 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7231a84-883c-463c-958e-0d2222057f5e-kube-api-access-gjg4x" (OuterVolumeSpecName: "kube-api-access-gjg4x") pod "b7231a84-883c-463c-958e-0d2222057f5e" (UID: "b7231a84-883c-463c-958e-0d2222057f5e"). InnerVolumeSpecName "kube-api-access-gjg4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.002787 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-config-data" (OuterVolumeSpecName: "config-data") pod "b7231a84-883c-463c-958e-0d2222057f5e" (UID: "b7231a84-883c-463c-958e-0d2222057f5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.011811 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7231a84-883c-463c-958e-0d2222057f5e" (UID: "b7231a84-883c-463c-958e-0d2222057f5e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.065757 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.066135 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7231a84-883c-463c-958e-0d2222057f5e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.066245 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjg4x\" (UniqueName: \"kubernetes.io/projected/b7231a84-883c-463c-958e-0d2222057f5e-kube-api-access-gjg4x\") on node \"crc\" DevicePath \"\"" Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.540048 4797 generic.go:334] "Generic (PLEG): container finished" podID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerID="f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb" exitCode=143 Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.540121 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cafa8d9-cdde-4277-8756-12acaf5c2bf2","Type":"ContainerDied","Data":"f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb"} Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.541824 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7231a84-883c-463c-958e-0d2222057f5e","Type":"ContainerDied","Data":"dd8392c6e29f06135d51a0addff91135aa72f2756d64e1286bafbc16f944028f"} Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.541843 4797 util.go:48] "No ready sandbox for pod can be found. 
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.541882 4797 scope.go:117] "RemoveContainer" containerID="332b9d5178a1bc554c8f8fda345853e496aeb98b5e32eabc49e59e2a326e7d5c"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.541948 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-log" containerID="cri-o://4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97" gracePeriod=30
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.542001 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-metadata" containerID="cri-o://28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb" gracePeriod=30
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.581950 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.596770 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.605895 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 11:28:45 crc kubenswrapper[4797]: E0216 11:28:45.606415 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerName="init"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.606445 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerName="init"
Feb 16 11:28:45 crc kubenswrapper[4797]: E0216 11:28:45.606843 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3a376f-d5a3-4695-8ace-93c71d98e93b" containerName="nova-manage"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.606864 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3a376f-d5a3-4695-8ace-93c71d98e93b" containerName="nova-manage"
Feb 16 11:28:45 crc kubenswrapper[4797]: E0216 11:28:45.606880 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7231a84-883c-463c-958e-0d2222057f5e" containerName="nova-scheduler-scheduler"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.606890 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7231a84-883c-463c-958e-0d2222057f5e" containerName="nova-scheduler-scheduler"
Feb 16 11:28:45 crc kubenswrapper[4797]: E0216 11:28:45.606907 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerName="dnsmasq-dns"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.606915 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerName="dnsmasq-dns"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.607172 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="13a4aa0f-f231-4931-b9c4-78f032d96d5f" containerName="dnsmasq-dns"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.607199 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7231a84-883c-463c-958e-0d2222057f5e" containerName="nova-scheduler-scheduler"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.607223 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3a376f-d5a3-4695-8ace-93c71d98e93b" containerName="nova-manage"
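[Annotation] Recreating nova-scheduler-0 under a new UID first triggers the CPU and memory managers' stale-state sweep above: per-container CPUSet and memory assignments left by pods that no longer exist (the old scheduler, the finished nova-manage job, the deleted dnsmasq-dns) are dropped before the new pod is admitted. A toy version of that sweep; the map layout and key format are illustrative only, not the managers' real state files:

```go
package main

import "fmt"

// removeStaleState drops assignments whose pod is no longer active,
// echoing the "RemoveStaleState: removing container" entries above.
func removeStaleState(assignments map[string]string, active map[string]bool) {
	for key, cpus := range assignments { // key: "<podUID>/<containerName>"
		if !active[key[:36]] { // a UID is 36 characters
			fmt.Printf("RemoveStaleState: removing container %q (cpuset %s)\n", key, cpus)
			delete(assignments, key) // deleting during range is safe in Go
		}
	}
}

func main() {
	assignments := map[string]string{
		"b7231a84-883c-463c-958e-0d2222057f5e/nova-scheduler-scheduler": "0-3",
	}
	removeStaleState(assignments, map[string]bool{})
	fmt.Println("remaining:", len(assignments)) // 0
}
```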
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.608215 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.610453 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.620000 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.779419 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fv7t\" (UniqueName: \"kubernetes.io/projected/560a6700-80d9-4db4-9fef-425ac7981273-kube-api-access-7fv7t\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.779823 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560a6700-80d9-4db4-9fef-425ac7981273-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.779879 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560a6700-80d9-4db4-9fef-425ac7981273-config-data\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.881621 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fv7t\" (UniqueName: \"kubernetes.io/projected/560a6700-80d9-4db4-9fef-425ac7981273-kube-api-access-7fv7t\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.881744 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560a6700-80d9-4db4-9fef-425ac7981273-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.881804 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560a6700-80d9-4db4-9fef-425ac7981273-config-data\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.885689 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560a6700-80d9-4db4-9fef-425ac7981273-config-data\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.886026 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560a6700-80d9-4db4-9fef-425ac7981273-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.908320 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fv7t\" (UniqueName: \"kubernetes.io/projected/560a6700-80d9-4db4-9fef-425ac7981273-kube-api-access-7fv7t\") pod \"nova-scheduler-0\" (UID: \"560a6700-80d9-4db4-9fef-425ac7981273\") " pod="openstack/nova-scheduler-0"
Feb 16 11:28:45 crc kubenswrapper[4797]: I0216 11:28:45.926931 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 11:28:46 crc kubenswrapper[4797]: I0216 11:28:46.008140 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7231a84-883c-463c-958e-0d2222057f5e" path="/var/lib/kubelet/pods/b7231a84-883c-463c-958e-0d2222057f5e/volumes"
Feb 16 11:28:46 crc kubenswrapper[4797]: I0216 11:28:46.461661 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 11:28:46 crc kubenswrapper[4797]: I0216 11:28:46.559976 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"560a6700-80d9-4db4-9fef-425ac7981273","Type":"ContainerStarted","Data":"60b0643a7927a0db3e7200de0d69c953cd20ca2b93fac9150841ba3942b014e1"}
Feb 16 11:28:46 crc kubenswrapper[4797]: I0216 11:28:46.564902 4797 generic.go:334] "Generic (PLEG): container finished" podID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerID="4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97" exitCode=143
Feb 16 11:28:46 crc kubenswrapper[4797]: I0216 11:28:46.564925 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a2493bc-7d20-447c-9b50-1b0a283ae30e","Type":"ContainerDied","Data":"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97"}
Feb 16 11:28:47 crc kubenswrapper[4797]: I0216 11:28:47.575505 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"560a6700-80d9-4db4-9fef-425ac7981273","Type":"ContainerStarted","Data":"6f329b47bc7dd39d21324b0f5f74e77e741b4a69993c40f69b0727f51bbdfdad"}
Feb 16 11:28:47 crc kubenswrapper[4797]: I0216 11:28:47.600885 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.60086571 podStartE2EDuration="2.60086571s" podCreationTimestamp="2026-02-16 11:28:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:47.592491101 +0000 UTC m=+1322.312676091" watchObservedRunningTime="2026-02-16 11:28:47.60086571 +0000 UTC m=+1322.321050690"
Feb 16 11:28:48 crc kubenswrapper[4797]: I0216 11:28:48.670840 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:50836->10.217.0.217:8775: read: connection reset by peer"
Feb 16 11:28:48 crc kubenswrapper[4797]: I0216 11:28:48.672380 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:50834->10.217.0.217:8775: read: connection reset by peer"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.265811 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.358001 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmhv4\" (UniqueName: \"kubernetes.io/projected/1a2493bc-7d20-447c-9b50-1b0a283ae30e-kube-api-access-qmhv4\") pod \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.358232 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-nova-metadata-tls-certs\") pod \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.358359 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2493bc-7d20-447c-9b50-1b0a283ae30e-logs\") pod \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.358447 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-combined-ca-bundle\") pod \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.358532 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-config-data\") pod \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\" (UID: \"1a2493bc-7d20-447c-9b50-1b0a283ae30e\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.360402 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a2493bc-7d20-447c-9b50-1b0a283ae30e-logs" (OuterVolumeSpecName: "logs") pod "1a2493bc-7d20-447c-9b50-1b0a283ae30e" (UID: "1a2493bc-7d20-447c-9b50-1b0a283ae30e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.384396 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a2493bc-7d20-447c-9b50-1b0a283ae30e-kube-api-access-qmhv4" (OuterVolumeSpecName: "kube-api-access-qmhv4") pod "1a2493bc-7d20-447c-9b50-1b0a283ae30e" (UID: "1a2493bc-7d20-447c-9b50-1b0a283ae30e"). InnerVolumeSpecName "kube-api-access-qmhv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.405047 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a2493bc-7d20-447c-9b50-1b0a283ae30e" (UID: "1a2493bc-7d20-447c-9b50-1b0a283ae30e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.420182 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1a2493bc-7d20-447c-9b50-1b0a283ae30e" (UID: "1a2493bc-7d20-447c-9b50-1b0a283ae30e"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.432367 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-config-data" (OuterVolumeSpecName: "config-data") pod "1a2493bc-7d20-447c-9b50-1b0a283ae30e" (UID: "1a2493bc-7d20-447c-9b50-1b0a283ae30e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.442371 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.460530 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmhv4\" (UniqueName: \"kubernetes.io/projected/1a2493bc-7d20-447c-9b50-1b0a283ae30e-kube-api-access-qmhv4\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.460561 4797 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.460585 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a2493bc-7d20-447c-9b50-1b0a283ae30e-logs\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.460599 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.460611 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a2493bc-7d20-447c-9b50-1b0a283ae30e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.561703 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-public-tls-certs\") pod \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.561761 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-logs\") pod \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.561827 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpzhx\" (UniqueName: \"kubernetes.io/projected/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-kube-api-access-jpzhx\") pod \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.561982 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-internal-tls-certs\") pod \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.562629 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-logs" (OuterVolumeSpecName: "logs") pod "3cafa8d9-cdde-4277-8756-12acaf5c2bf2" (UID: "3cafa8d9-cdde-4277-8756-12acaf5c2bf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.563366 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-config-data\") pod \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.563413 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-combined-ca-bundle\") pod \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\" (UID: \"3cafa8d9-cdde-4277-8756-12acaf5c2bf2\") "
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.564657 4797 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-logs\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.566886 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-kube-api-access-jpzhx" (OuterVolumeSpecName: "kube-api-access-jpzhx") pod "3cafa8d9-cdde-4277-8756-12acaf5c2bf2" (UID: "3cafa8d9-cdde-4277-8756-12acaf5c2bf2"). InnerVolumeSpecName "kube-api-access-jpzhx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.588227 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-config-data" (OuterVolumeSpecName: "config-data") pod "3cafa8d9-cdde-4277-8756-12acaf5c2bf2" (UID: "3cafa8d9-cdde-4277-8756-12acaf5c2bf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.600326 4797 generic.go:334] "Generic (PLEG): container finished" podID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerID="28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb" exitCode=0
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.600415 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a2493bc-7d20-447c-9b50-1b0a283ae30e","Type":"ContainerDied","Data":"28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb"}
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.600451 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1a2493bc-7d20-447c-9b50-1b0a283ae30e","Type":"ContainerDied","Data":"74de123b3d02be3de614d1f5ece04ebf7c2b7beb8541aabea44ef64d4577999c"}
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.600469 4797 scope.go:117] "RemoveContainer" containerID="28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.600680 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.606665 4797 generic.go:334] "Generic (PLEG): container finished" podID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerID="a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25" exitCode=0
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.606745 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cafa8d9-cdde-4277-8756-12acaf5c2bf2","Type":"ContainerDied","Data":"a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25"}
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.606768 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3cafa8d9-cdde-4277-8756-12acaf5c2bf2","Type":"ContainerDied","Data":"e18a81a8af8791a09334f1a59345cc02a64ef5cbeff3b226db195fe16a68cc08"}
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.606832 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.608608 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3cafa8d9-cdde-4277-8756-12acaf5c2bf2" (UID: "3cafa8d9-cdde-4277-8756-12acaf5c2bf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.613775 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3cafa8d9-cdde-4277-8756-12acaf5c2bf2" (UID: "3cafa8d9-cdde-4277-8756-12acaf5c2bf2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.643671 4797 scope.go:117] "RemoveContainer" containerID="4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.645385 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3cafa8d9-cdde-4277-8756-12acaf5c2bf2" (UID: "3cafa8d9-cdde-4277-8756-12acaf5c2bf2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.672660 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.673796 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.673882 4797 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.673989 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpzhx\" (UniqueName: \"kubernetes.io/projected/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-kube-api-access-jpzhx\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.674168 4797 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3cafa8d9-cdde-4277-8756-12acaf5c2bf2-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.675964 4797 scope.go:117] "RemoveContainer" containerID="28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb"
Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.677080 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb\": container with ID starting with 28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb not found: ID does not exist" containerID="28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.677143 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb"} err="failed to get container status \"28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb\": rpc error: code = NotFound desc = could not find container \"28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb\": container with ID starting with 28d91196f3cf2a352faa86228893fdf0f036269b941943fb60b5e051ef7714cb not found: ID does not exist"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.677170 4797 scope.go:117] "RemoveContainer" containerID="4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97"
Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.677887 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97\": container with ID starting with 4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97 not found: ID does not exist" containerID="4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.677928 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97"} err="failed to get container status \"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97\": rpc error: code = NotFound desc = could not find container \"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97\": container with ID starting with 4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97 not found: ID does not exist"
status \"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97\": rpc error: code = NotFound desc = could not find container \"4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97\": container with ID starting with 4314bf0e0446459c6debb8cc6f37c87b3e707df9942e52bbc90edef677355b97 not found: ID does not exist" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.677956 4797 scope.go:117] "RemoveContainer" containerID="a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.693796 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.698415 4797 scope.go:117] "RemoveContainer" containerID="f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.704547 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.712910 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.713981 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-metadata" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714010 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-metadata" Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.714068 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-api" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714077 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-api" Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.714089 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-log" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714097 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-log" Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.714117 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-log" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714125 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-log" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714388 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-log" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714415 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-log" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714434 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" containerName="nova-api-api" Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.714450 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" containerName="nova-metadata-metadata" Feb 
16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.715884 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.720911 4797 scope.go:117] "RemoveContainer" containerID="a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.721076 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.721083 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.725098 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.732089 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25\": container with ID starting with a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25 not found: ID does not exist" containerID="a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.732135 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25"} err="failed to get container status \"a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25\": rpc error: code = NotFound desc = could not find container \"a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25\": container with ID starting with a9a1f2a81de2cf4a7848326e29b85b2e21589587719788453a89894098ebcd25 not found: ID does not exist"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.732166 4797 scope.go:117] "RemoveContainer" containerID="f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb"
Feb 16 11:28:49 crc kubenswrapper[4797]: E0216 11:28:49.733654 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb\": container with ID starting with f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb not found: ID does not exist" containerID="f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.733695 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb"} err="failed to get container status \"f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb\": rpc error: code = NotFound desc = could not find container \"f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb\": container with ID starting with f9856e0add7a140933815dc46addac0e0c3fae40c30a6b898f77a4b2671c90cb not found: ID does not exist"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.878544 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0"
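An aside on the paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above: they record a benign race, not a real failure. The container was already removed by an earlier RemoveContainer pass, so the follow-up CRI status lookup gets gRPC NotFound and the deletion is effectively already complete. A minimal Go sketch of the idempotent-delete pattern (illustrative only, not kubelet's actual code; the `remove` callback stands in for the CRI RemoveContainer RPC):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeIfPresent treats gRPC NotFound as success: the container is
// already gone, which is the end state the caller wanted anyway.
func removeIfPresent(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		return nil // deleted by an earlier pass; nothing left to do
	}
	return err // nil on success, or a real failure to surface
}

func main() {
	// Simulate the race seen in the log: the runtime answers NotFound
	// because the container vanished between two cleanup passes.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	fmt.Println(removeIfPresent(gone, "28d91196f3cf")) // <nil>
}
```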
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.878738 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-config-data\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.878782 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.878848 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j2nf\" (UniqueName: \"kubernetes.io/projected/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-kube-api-access-2j2nf\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.878927 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-logs\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0"
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.938639 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.948505 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.959404 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 11:28:49 crc kubenswrapper[4797]: I0216 11:28:49.961498 4797 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:49.975111 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:49.979158 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:49.979432 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:49.979627 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.007988 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxf62\" (UniqueName: \"kubernetes.io/projected/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-kube-api-access-mxf62\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.008500 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-config-data\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.008569 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.008707 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.008761 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j2nf\" (UniqueName: \"kubernetes.io/projected/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-kube-api-access-2j2nf\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.008874 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-logs\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.009003 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-logs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.009063 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.009162 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.009209 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.009332 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-config-data\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.009957 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-logs\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.013374 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.013798 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.017860 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-config-data\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.025086 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a2493bc-7d20-447c-9b50-1b0a283ae30e" path="/var/lib/kubelet/pods/1a2493bc-7d20-447c-9b50-1b0a283ae30e/volumes" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.025805 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cafa8d9-cdde-4277-8756-12acaf5c2bf2" path="/var/lib/kubelet/pods/3cafa8d9-cdde-4277-8756-12acaf5c2bf2/volumes" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.031054 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j2nf\" (UniqueName: \"kubernetes.io/projected/520dbe8b-c811-47c8-9e86-7bb3d5cc7580-kube-api-access-2j2nf\") pod \"nova-metadata-0\" (UID: \"520dbe8b-c811-47c8-9e86-7bb3d5cc7580\") " pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.034186 4797 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.110629 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.112868 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-logs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.113324 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-logs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.113530 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.113559 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.115441 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-config-data\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.115510 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxf62\" (UniqueName: \"kubernetes.io/projected/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-kube-api-access-mxf62\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.115986 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-public-tls-certs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.116539 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.136408 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " 
pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.137418 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-config-data\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.139838 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxf62\" (UniqueName: \"kubernetes.io/projected/f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a-kube-api-access-mxf62\") pod \"nova-api-0\" (UID: \"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a\") " pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.237313 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.495697 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 11:28:50 crc kubenswrapper[4797]: W0216 11:28:50.498061 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod520dbe8b_c811_47c8_9e86_7bb3d5cc7580.slice/crio-0e81aeea62ba2eb6f7595b25570df5805c3da1ec771d9939b80372aa9f0d01e8 WatchSource:0}: Error finding container 0e81aeea62ba2eb6f7595b25570df5805c3da1ec771d9939b80372aa9f0d01e8: Status 404 returned error can't find the container with id 0e81aeea62ba2eb6f7595b25570df5805c3da1ec771d9939b80372aa9f0d01e8 Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.618752 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"520dbe8b-c811-47c8-9e86-7bb3d5cc7580","Type":"ContainerStarted","Data":"0e81aeea62ba2eb6f7595b25570df5805c3da1ec771d9939b80372aa9f0d01e8"} Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.676994 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 11:28:50 crc kubenswrapper[4797]: W0216 11:28:50.680002 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ab05ef_ab44_4a5d_bf44_f2e1e2c6699a.slice/crio-aceeb4ed187baa5a86d1d3a6fc70167e9cc7e5ea72169b9839e1c2cf24ab15c5 WatchSource:0}: Error finding container aceeb4ed187baa5a86d1d3a6fc70167e9cc7e5ea72169b9839e1c2cf24ab15c5: Status 404 returned error can't find the container with id aceeb4ed187baa5a86d1d3a6fc70167e9cc7e5ea72169b9839e1c2cf24ab15c5 Feb 16 11:28:50 crc kubenswrapper[4797]: I0216 11:28:50.928745 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 11:28:51 crc kubenswrapper[4797]: I0216 11:28:51.635277 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a","Type":"ContainerStarted","Data":"26c6efd854f49f5dc803297a9978c4931c67dafe33131698bc88b72ddac65d29"} Feb 16 11:28:51 crc kubenswrapper[4797]: I0216 11:28:51.635327 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a","Type":"ContainerStarted","Data":"ded226582609075c6a93681675a2066ec8426d2f8e92302e815eeda561fb9a1f"} Feb 16 11:28:51 crc kubenswrapper[4797]: I0216 11:28:51.635343 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a","Type":"ContainerStarted","Data":"aceeb4ed187baa5a86d1d3a6fc70167e9cc7e5ea72169b9839e1c2cf24ab15c5"}
Feb 16 11:28:51 crc kubenswrapper[4797]: I0216 11:28:51.637458 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"520dbe8b-c811-47c8-9e86-7bb3d5cc7580","Type":"ContainerStarted","Data":"52f2310c4c0dc7afdab30e5975e531fd276257c769bd1b6da4f07a5ef2307f1c"}
Feb 16 11:28:51 crc kubenswrapper[4797]: I0216 11:28:51.637521 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"520dbe8b-c811-47c8-9e86-7bb3d5cc7580","Type":"ContainerStarted","Data":"b6ba8400d9ed836b7b1125160f1f366b28ebe7a238625e783277be6635fcf6a8"}
Feb 16 11:28:51 crc kubenswrapper[4797]: I0216 11:28:51.661292 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.661263769 podStartE2EDuration="2.661263769s" podCreationTimestamp="2026-02-16 11:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:51.659024128 +0000 UTC m=+1326.379209118" watchObservedRunningTime="2026-02-16 11:28:51.661263769 +0000 UTC m=+1326.381448769"
Feb 16 11:28:51 crc kubenswrapper[4797]: I0216 11:28:51.689877 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6898586509999998 podStartE2EDuration="2.689858651s" podCreationTimestamp="2026-02-16 11:28:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 11:28:51.677179793 +0000 UTC m=+1326.397364783" watchObservedRunningTime="2026-02-16 11:28:51.689858651 +0000 UTC m=+1326.410043631"
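A note on the two "Observed pod startup duration" entries above: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (the numbers match to the nanosecond: 11:28:51.661263769 − 11:28:49 = 2.661263769s for nova-api-0), and podStartSLOduration equals the E2E value here because both image-pull timestamps are the zero time, i.e. no pull happened. The arithmetic, checked in Go:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the nova-api-0 tracker entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-02-16T11:28:49Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-02-16T11:28:51.661263769Z")
	fmt.Println(observed.Sub(created)) // 2.661263769s, the logged podStartE2EDuration
}
```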
Feb 16 11:28:54 crc kubenswrapper[4797]: E0216 11:28:54.988136 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:28:55 crc kubenswrapper[4797]: I0216 11:28:55.035389 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 11:28:55 crc kubenswrapper[4797]: I0216 11:28:55.035463 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Feb 16 11:28:55 crc kubenswrapper[4797]: I0216 11:28:55.928217 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Feb 16 11:28:55 crc kubenswrapper[4797]: I0216 11:28:55.958208 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Feb 16 11:28:56 crc kubenswrapper[4797]: I0216 11:28:56.744557 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Feb 16 11:29:00 crc kubenswrapper[4797]: I0216 11:29:00.035941 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 11:29:00 crc kubenswrapper[4797]: I0216 11:29:00.036662 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Feb 16 11:29:00 crc kubenswrapper[4797]: I0216 11:29:00.235612 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="30bb31d4-fd04-40bc-95f0-e3c7d5bb9a25" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.213:3000/\": dial tcp 10.217.0.213:3000: i/o timeout"
Feb 16 11:29:00 crc kubenswrapper[4797]: I0216 11:29:00.238369 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 11:29:00 crc kubenswrapper[4797]: I0216 11:29:00.238420 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Feb 16 11:29:01 crc kubenswrapper[4797]: I0216 11:29:01.052960 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="520dbe8b-c811-47c8-9e86-7bb3d5cc7580" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 11:29:01 crc kubenswrapper[4797]: I0216 11:29:01.052967 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="520dbe8b-c811-47c8-9e86-7bb3d5cc7580" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 11:29:01 crc kubenswrapper[4797]: I0216 11:29:01.254771 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 11:29:01 crc kubenswrapper[4797]: I0216 11:29:01.254798 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 11:29:01 crc kubenswrapper[4797]: I0216 11:29:01.763224 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 16 11:29:09 crc kubenswrapper[4797]: E0216 11:29:09.984620 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.039814 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.040388 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.043250 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.245716 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.246241 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.246927 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.246965 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.253297 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.258023 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 11:29:10 crc kubenswrapper[4797]: I0216 11:29:10.844148 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
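The probe churn above is a normal cold start rather than a crash loop: both nova pods go startup status="unhealthy" at roughly 11s after creation (the HTTPS GETs to :8775/:8774 time out while the API workers are still coming up), then flip to "started" and "ready" by roughly 21s. A startup probe of that general shape could be declared as in the following client-go sketch; the period and threshold values here are illustrative assumptions, not taken from the actual OpenStack operator:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// A startup probe that tolerates a slow listener: probe every 5s,
	// and only fail the container after 12 misses (~60s of grace).
	probe := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/",
				Port:   intstr.FromInt(8774),
				Scheme: corev1.URISchemeHTTPS,
			},
		},
		PeriodSeconds:    5,
		FailureThreshold: 12,
	}
	fmt.Printf("%+v\n", probe)
}
```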
Feb 16 11:29:20 crc kubenswrapper[4797]: E0216 11:29:20.985768 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:29:31 crc kubenswrapper[4797]: E0216 11:29:31.985676 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:29:41 crc kubenswrapper[4797]: I0216 11:29:41.703178 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:29:41 crc kubenswrapper[4797]: I0216 11:29:41.703782 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:29:43 crc kubenswrapper[4797]: E0216 11:29:43.987962 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:29:57 crc kubenswrapper[4797]: E0216 11:29:57.994663 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.155263 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"]
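The numeric suffix in collect-profiles-29520690 is not random: the CronJob controller appears to derive it from the scheduled run time expressed in minutes since the Unix epoch, and 29520690 × 60 = 1771241400 s is exactly 2026-02-16 11:30:00 UTC, the minute this pod is created below. A quick check of that arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Interpret the Job-name suffix as minutes since the Unix epoch.
	const suffix int64 = 29520690
	fmt.Println(time.Unix(suffix*60, 0).UTC()) // 2026-02-16 11:30:00 +0000 UTC
}
```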
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.157275 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.159717 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.160175 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.175197 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"]
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.297762 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8c3991-e127-4b45-a7b8-f925c224c597-secret-volume\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.297824 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6hqx\" (UniqueName: \"kubernetes.io/projected/cb8c3991-e127-4b45-a7b8-f925c224c597-kube-api-access-t6hqx\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.297881 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8c3991-e127-4b45-a7b8-f925c224c597-config-volume\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.399768 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8c3991-e127-4b45-a7b8-f925c224c597-secret-volume\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.399837 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6hqx\" (UniqueName: \"kubernetes.io/projected/cb8c3991-e127-4b45-a7b8-f925c224c597-kube-api-access-t6hqx\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.399869 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8c3991-e127-4b45-a7b8-f925c224c597-config-volume\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"
Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.400835 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8c3991-e127-4b45-a7b8-f925c224c597-config-volume\") pod
\"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.405878 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8c3991-e127-4b45-a7b8-f925c224c597-secret-volume\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.424475 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6hqx\" (UniqueName: \"kubernetes.io/projected/cb8c3991-e127-4b45-a7b8-f925c224c597-kube-api-access-t6hqx\") pod \"collect-profiles-29520690-zxzb5\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.478309 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" Feb 16 11:30:00 crc kubenswrapper[4797]: I0216 11:30:00.988190 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5"] Feb 16 11:30:01 crc kubenswrapper[4797]: I0216 11:30:01.429964 4797 generic.go:334] "Generic (PLEG): container finished" podID="cb8c3991-e127-4b45-a7b8-f925c224c597" containerID="4a71cdeaa15b3207eb2bcb8aea6d493544e82180ffd0a163134fcb06b3078091" exitCode=0 Feb 16 11:30:01 crc kubenswrapper[4797]: I0216 11:30:01.430759 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" event={"ID":"cb8c3991-e127-4b45-a7b8-f925c224c597","Type":"ContainerDied","Data":"4a71cdeaa15b3207eb2bcb8aea6d493544e82180ffd0a163134fcb06b3078091"} Feb 16 11:30:01 crc kubenswrapper[4797]: I0216 11:30:01.430830 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" event={"ID":"cb8c3991-e127-4b45-a7b8-f925c224c597","Type":"ContainerStarted","Data":"6a2a3ee58ddbcd782a18e55a65b097020e9c6d530a4d665cf6f3b023cd8bda67"} Feb 16 11:30:02 crc kubenswrapper[4797]: I0216 11:30:02.879461 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" Feb 16 11:30:02 crc kubenswrapper[4797]: I0216 11:30:02.975990 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8c3991-e127-4b45-a7b8-f925c224c597-secret-volume\") pod \"cb8c3991-e127-4b45-a7b8-f925c224c597\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " Feb 16 11:30:02 crc kubenswrapper[4797]: I0216 11:30:02.976121 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8c3991-e127-4b45-a7b8-f925c224c597-config-volume\") pod \"cb8c3991-e127-4b45-a7b8-f925c224c597\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " Feb 16 11:30:02 crc kubenswrapper[4797]: I0216 11:30:02.976208 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6hqx\" (UniqueName: \"kubernetes.io/projected/cb8c3991-e127-4b45-a7b8-f925c224c597-kube-api-access-t6hqx\") pod \"cb8c3991-e127-4b45-a7b8-f925c224c597\" (UID: \"cb8c3991-e127-4b45-a7b8-f925c224c597\") " Feb 16 11:30:02 crc kubenswrapper[4797]: I0216 11:30:02.976855 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8c3991-e127-4b45-a7b8-f925c224c597-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb8c3991-e127-4b45-a7b8-f925c224c597" (UID: "cb8c3991-e127-4b45-a7b8-f925c224c597"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 11:30:02 crc kubenswrapper[4797]: I0216 11:30:02.982017 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8c3991-e127-4b45-a7b8-f925c224c597-kube-api-access-t6hqx" (OuterVolumeSpecName: "kube-api-access-t6hqx") pod "cb8c3991-e127-4b45-a7b8-f925c224c597" (UID: "cb8c3991-e127-4b45-a7b8-f925c224c597"). InnerVolumeSpecName "kube-api-access-t6hqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:30:02 crc kubenswrapper[4797]: I0216 11:30:02.982370 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8c3991-e127-4b45-a7b8-f925c224c597-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb8c3991-e127-4b45-a7b8-f925c224c597" (UID: "cb8c3991-e127-4b45-a7b8-f925c224c597"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 11:30:03 crc kubenswrapper[4797]: I0216 11:30:03.078473 4797 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8c3991-e127-4b45-a7b8-f925c224c597-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:30:03 crc kubenswrapper[4797]: I0216 11:30:03.078511 4797 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8c3991-e127-4b45-a7b8-f925c224c597-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 11:30:03 crc kubenswrapper[4797]: I0216 11:30:03.078522 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6hqx\" (UniqueName: \"kubernetes.io/projected/cb8c3991-e127-4b45-a7b8-f925c224c597-kube-api-access-t6hqx\") on node \"crc\" DevicePath \"\"" Feb 16 11:30:03 crc kubenswrapper[4797]: I0216 11:30:03.449901 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" event={"ID":"cb8c3991-e127-4b45-a7b8-f925c224c597","Type":"ContainerDied","Data":"6a2a3ee58ddbcd782a18e55a65b097020e9c6d530a4d665cf6f3b023cd8bda67"} Feb 16 11:30:03 crc kubenswrapper[4797]: I0216 11:30:03.449942 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2a3ee58ddbcd782a18e55a65b097020e9c6d530a4d665cf6f3b023cd8bda67" Feb 16 11:30:03 crc kubenswrapper[4797]: I0216 11:30:03.449943 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520690-zxzb5" Feb 16 11:30:11 crc kubenswrapper[4797]: I0216 11:30:11.703723 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:30:11 crc kubenswrapper[4797]: I0216 11:30:11.704324 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:30:11 crc kubenswrapper[4797]: E0216 11:30:11.985303 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.815047 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6x5g"] Feb 16 11:30:21 crc kubenswrapper[4797]: E0216 11:30:21.816145 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8c3991-e127-4b45-a7b8-f925c224c597" containerName="collect-profiles" Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.816163 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8c3991-e127-4b45-a7b8-f925c224c597" containerName="collect-profiles" Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.816457 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8c3991-e127-4b45-a7b8-f925c224c597" 
containerName="collect-profiles" Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.819387 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.835511 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6x5g"] Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.982616 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-catalog-content\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.983360 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl8b7\" (UniqueName: \"kubernetes.io/projected/5541bbe3-117c-4332-a51a-10bc1e12a363-kube-api-access-tl8b7\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:21 crc kubenswrapper[4797]: I0216 11:30:21.983502 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-utilities\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.085021 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-catalog-content\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.085108 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl8b7\" (UniqueName: \"kubernetes.io/projected/5541bbe3-117c-4332-a51a-10bc1e12a363-kube-api-access-tl8b7\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.085142 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-utilities\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.085561 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-utilities\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.086622 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-catalog-content\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 
16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.109864 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl8b7\" (UniqueName: \"kubernetes.io/projected/5541bbe3-117c-4332-a51a-10bc1e12a363-kube-api-access-tl8b7\") pod \"redhat-operators-h6x5g\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.152152 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.631362 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6x5g"] Feb 16 11:30:22 crc kubenswrapper[4797]: I0216 11:30:22.653185 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6x5g" event={"ID":"5541bbe3-117c-4332-a51a-10bc1e12a363","Type":"ContainerStarted","Data":"fc1e70716b10f91686eaff586096c44714d4eb69d8448774e82e2da30c744c1c"} Feb 16 11:30:22 crc kubenswrapper[4797]: E0216 11:30:22.984414 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:30:23 crc kubenswrapper[4797]: I0216 11:30:23.673078 4797 generic.go:334] "Generic (PLEG): container finished" podID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerID="aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452" exitCode=0 Feb 16 11:30:23 crc kubenswrapper[4797]: I0216 11:30:23.673138 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6x5g" event={"ID":"5541bbe3-117c-4332-a51a-10bc1e12a363","Type":"ContainerDied","Data":"aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452"} Feb 16 11:30:24 crc kubenswrapper[4797]: I0216 11:30:24.687438 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6x5g" event={"ID":"5541bbe3-117c-4332-a51a-10bc1e12a363","Type":"ContainerStarted","Data":"e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53"} Feb 16 11:30:27 crc kubenswrapper[4797]: I0216 11:30:27.722893 4797 generic.go:334] "Generic (PLEG): container finished" podID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerID="e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53" exitCode=0 Feb 16 11:30:27 crc kubenswrapper[4797]: I0216 11:30:27.722973 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6x5g" event={"ID":"5541bbe3-117c-4332-a51a-10bc1e12a363","Type":"ContainerDied","Data":"e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53"} Feb 16 11:30:28 crc kubenswrapper[4797]: I0216 11:30:28.739316 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6x5g" event={"ID":"5541bbe3-117c-4332-a51a-10bc1e12a363","Type":"ContainerStarted","Data":"85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05"} Feb 16 11:30:28 crc kubenswrapper[4797]: I0216 11:30:28.772924 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6x5g" podStartSLOduration=3.296609497 podStartE2EDuration="7.772891324s" podCreationTimestamp="2026-02-16 
11:30:21 +0000 UTC" firstStartedPulling="2026-02-16 11:30:23.675523966 +0000 UTC m=+1418.395708956" lastFinishedPulling="2026-02-16 11:30:28.151805803 +0000 UTC m=+1422.871990783" observedRunningTime="2026-02-16 11:30:28.762961703 +0000 UTC m=+1423.483146693" watchObservedRunningTime="2026-02-16 11:30:28.772891324 +0000 UTC m=+1423.493076344" Feb 16 11:30:32 crc kubenswrapper[4797]: I0216 11:30:32.153605 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:32 crc kubenswrapper[4797]: I0216 11:30:32.153963 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:33 crc kubenswrapper[4797]: I0216 11:30:33.198532 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6x5g" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="registry-server" probeResult="failure" output=< Feb 16 11:30:33 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s Feb 16 11:30:33 crc kubenswrapper[4797]: > Feb 16 11:30:33 crc kubenswrapper[4797]: E0216 11:30:33.984914 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.703944 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.704689 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.704747 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.707957 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4adc9a4c2c83159ff79fb70bc5ca20f35b8b3e0651a933445365cd8743b4c78b"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.708082 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://4adc9a4c2c83159ff79fb70bc5ca20f35b8b3e0651a933445365cd8743b4c78b" gracePeriod=600 Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.882054 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" 
containerID="4adc9a4c2c83159ff79fb70bc5ca20f35b8b3e0651a933445365cd8743b4c78b" exitCode=0 Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.882118 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"4adc9a4c2c83159ff79fb70bc5ca20f35b8b3e0651a933445365cd8743b4c78b"} Feb 16 11:30:41 crc kubenswrapper[4797]: I0216 11:30:41.882178 4797 scope.go:117] "RemoveContainer" containerID="ba3093423333884d09bb1138cadcee536dc44a6bdfca7536ddc371719d3f0a4a" Feb 16 11:30:41 crc kubenswrapper[4797]: E0216 11:30:41.969327 4797 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod128f4e85_fd17_4281_97d2_872fda792b21.slice/crio-conmon-4adc9a4c2c83159ff79fb70bc5ca20f35b8b3e0651a933445365cd8743b4c78b.scope\": RecentStats: unable to find data in memory cache]" Feb 16 11:30:42 crc kubenswrapper[4797]: I0216 11:30:42.206229 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:42 crc kubenswrapper[4797]: I0216 11:30:42.279258 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:42 crc kubenswrapper[4797]: I0216 11:30:42.446836 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6x5g"] Feb 16 11:30:42 crc kubenswrapper[4797]: I0216 11:30:42.897162 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5"} Feb 16 11:30:43 crc kubenswrapper[4797]: I0216 11:30:43.909389 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6x5g" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="registry-server" containerID="cri-o://85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05" gracePeriod=2 Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.415741 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.558285 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-utilities\") pod \"5541bbe3-117c-4332-a51a-10bc1e12a363\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.558524 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-catalog-content\") pod \"5541bbe3-117c-4332-a51a-10bc1e12a363\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.558556 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl8b7\" (UniqueName: \"kubernetes.io/projected/5541bbe3-117c-4332-a51a-10bc1e12a363-kube-api-access-tl8b7\") pod \"5541bbe3-117c-4332-a51a-10bc1e12a363\" (UID: \"5541bbe3-117c-4332-a51a-10bc1e12a363\") " Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.560956 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-utilities" (OuterVolumeSpecName: "utilities") pod "5541bbe3-117c-4332-a51a-10bc1e12a363" (UID: "5541bbe3-117c-4332-a51a-10bc1e12a363"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.566859 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5541bbe3-117c-4332-a51a-10bc1e12a363-kube-api-access-tl8b7" (OuterVolumeSpecName: "kube-api-access-tl8b7") pod "5541bbe3-117c-4332-a51a-10bc1e12a363" (UID: "5541bbe3-117c-4332-a51a-10bc1e12a363"). InnerVolumeSpecName "kube-api-access-tl8b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.661131 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl8b7\" (UniqueName: \"kubernetes.io/projected/5541bbe3-117c-4332-a51a-10bc1e12a363-kube-api-access-tl8b7\") on node \"crc\" DevicePath \"\"" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.661168 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.703501 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5541bbe3-117c-4332-a51a-10bc1e12a363" (UID: "5541bbe3-117c-4332-a51a-10bc1e12a363"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.763129 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5541bbe3-117c-4332-a51a-10bc1e12a363-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.922945 4797 generic.go:334] "Generic (PLEG): container finished" podID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerID="85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05" exitCode=0 Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.923002 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6x5g" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.923021 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6x5g" event={"ID":"5541bbe3-117c-4332-a51a-10bc1e12a363","Type":"ContainerDied","Data":"85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05"} Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.923317 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6x5g" event={"ID":"5541bbe3-117c-4332-a51a-10bc1e12a363","Type":"ContainerDied","Data":"fc1e70716b10f91686eaff586096c44714d4eb69d8448774e82e2da30c744c1c"} Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.923340 4797 scope.go:117] "RemoveContainer" containerID="85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.946650 4797 scope.go:117] "RemoveContainer" containerID="e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53" Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.970342 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6x5g"] Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.982337 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6x5g"] Feb 16 11:30:44 crc kubenswrapper[4797]: I0216 11:30:44.997730 4797 scope.go:117] "RemoveContainer" containerID="aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452" Feb 16 11:30:45 crc kubenswrapper[4797]: I0216 11:30:45.047430 4797 scope.go:117] "RemoveContainer" containerID="85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05" Feb 16 11:30:45 crc kubenswrapper[4797]: E0216 11:30:45.048081 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05\": container with ID starting with 85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05 not found: ID does not exist" containerID="85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05" Feb 16 11:30:45 crc kubenswrapper[4797]: I0216 11:30:45.048135 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05"} err="failed to get container status \"85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05\": rpc error: code = NotFound desc = could not find container \"85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05\": container with ID starting with 85014d03ab5e66f72827bbfaa8fd2b1dea08beb8362a99ef4cbcf841c98d0d05 not found: ID does not exist" Feb 16 11:30:45 crc 
kubenswrapper[4797]: I0216 11:30:45.048162 4797 scope.go:117] "RemoveContainer" containerID="e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53" Feb 16 11:30:45 crc kubenswrapper[4797]: E0216 11:30:45.048464 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53\": container with ID starting with e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53 not found: ID does not exist" containerID="e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53" Feb 16 11:30:45 crc kubenswrapper[4797]: I0216 11:30:45.048493 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53"} err="failed to get container status \"e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53\": rpc error: code = NotFound desc = could not find container \"e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53\": container with ID starting with e5b53a392970eea6e895b4f28a827279b2ee72e14da4733cf639bbc4e3ea1b53 not found: ID does not exist" Feb 16 11:30:45 crc kubenswrapper[4797]: I0216 11:30:45.048514 4797 scope.go:117] "RemoveContainer" containerID="aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452" Feb 16 11:30:45 crc kubenswrapper[4797]: E0216 11:30:45.048803 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452\": container with ID starting with aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452 not found: ID does not exist" containerID="aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452" Feb 16 11:30:45 crc kubenswrapper[4797]: I0216 11:30:45.048831 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452"} err="failed to get container status \"aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452\": rpc error: code = NotFound desc = could not find container \"aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452\": container with ID starting with aa2c53f5b7f8776a894ec31f1c99f22fab6940564bb95e012b686df5320d6452 not found: ID does not exist" Feb 16 11:30:45 crc kubenswrapper[4797]: I0216 11:30:45.996105 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" path="/var/lib/kubelet/pods/5541bbe3-117c-4332-a51a-10bc1e12a363/volumes" Feb 16 11:30:48 crc kubenswrapper[4797]: E0216 11:30:48.986118 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:31:00 crc kubenswrapper[4797]: E0216 11:31:00.986805 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:31:03 crc kubenswrapper[4797]: 
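The RemoveContainer / NotFound pairs above look alarming but are a benign race: the kubelet asks CRI-O to delete containers that were already removed, so ContainerStatus comes back NotFound and DeleteContainer reports the same. A quick way to separate these double-deletes from real runtime errors when scanning a journal like this one; a sketch, assuming the log has been exported to plain text (e.g. journalctl -u kubelet) and is fed on stdin:

    import re
    import sys

    HEX_ID = re.compile(r"[0-9a-f]{64}")
    removed, missing = set(), set()
    for line in sys.stdin:
        ids = HEX_ID.findall(line)
        if '"RemoveContainer"' in line:
            removed.update(ids)
        if "not found: ID does not exist" in line:
            missing.update(ids)
    # IDs in both sets were deleted twice; anything only in `missing`
    # deserves a closer look.
    for cid in sorted(removed & missing):
        print("benign double-delete:", cid[:13])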
Feb 16 11:31:03 crc kubenswrapper[4797]: I0216 11:31:03.156358 4797 scope.go:117] "RemoveContainer" containerID="7c17efbf5337bfe587e030f97c9abb8c4eb0225078a45264183c4d629bc8d0e8"
Feb 16 11:31:03 crc kubenswrapper[4797]: I0216 11:31:03.183705 4797 scope.go:117] "RemoveContainer" containerID="aa8addfefb65f4141009154f91dea0df9fec4e6c78cb2bb340e0de3bf13759c3"
Feb 16 11:31:12 crc kubenswrapper[4797]: E0216 11:31:12.110750 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 11:31:12 crc kubenswrapper[4797]: E0216 11:31:12.111306 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 11:31:12 crc kubenswrapper[4797]: E0216 11:31:12.111443 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 11:31:12 crc kubenswrapper[4797]: E0216 11:31:12.112807 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:31:24 crc kubenswrapper[4797]: E0216 11:31:24.985433 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.081135 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x6qsk"]
Feb 16 11:31:34 crc kubenswrapper[4797]: E0216 11:31:34.082166 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="extract-content"
Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.082182 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="extract-content"
Feb 16 11:31:34 crc kubenswrapper[4797]: E0216 11:31:34.082246 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="extract-utilities"
Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.082254 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="extract-utilities"
Feb 16 11:31:34 crc kubenswrapper[4797]: E0216 11:31:34.082304 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="registry-server"
Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.082313 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="registry-server"
Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.082528 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5541bbe3-117c-4332-a51a-10bc1e12a363" containerName="registry-server"
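The 11:31:12 pull attempt shows the terminal cause behind the recurring ImagePullBackOff: quay.rdoproject.org reports that the current tag of openstack-cloudkitty-api was deleted or expired. One way to confirm that from outside the cluster is to ask the registry's Docker Registry HTTP API v2 for the tag's manifest; a sketch, assuming anonymous access is allowed (some registries answer 401 first and require a bearer token):

    import urllib.request
    import urllib.error

    URL = ("https://quay.rdoproject.org/v2/podified-master-centos10/"
           "openstack-cloudkitty-api/manifests/current")
    req = urllib.request.Request(URL, method="HEAD", headers={
        "Accept": "application/vnd.docker.distribution.manifest.v2+json",
    })
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print("tag exists, status", resp.status)
    except urllib.error.HTTPError as exc:
        # A 404 here matches "Tag current was deleted or has expired".
        print("manifest lookup failed:", exc.code)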
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.091931 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6qsk"] Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.172539 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-utilities\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.172805 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prkrd\" (UniqueName: \"kubernetes.io/projected/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-kube-api-access-prkrd\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.173137 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-catalog-content\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.275241 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prkrd\" (UniqueName: \"kubernetes.io/projected/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-kube-api-access-prkrd\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.275380 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-catalog-content\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.275464 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-utilities\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.275989 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-catalog-content\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.275994 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-utilities\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.295232 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-prkrd\" (UniqueName: \"kubernetes.io/projected/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-kube-api-access-prkrd\") pod \"redhat-marketplace-x6qsk\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.421645 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:34 crc kubenswrapper[4797]: I0216 11:31:34.882514 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6qsk"] Feb 16 11:31:34 crc kubenswrapper[4797]: W0216 11:31:34.889533 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cd7d105_1a64_4369_8536_20b9ed1a8d6c.slice/crio-94226b47c5d56d2326a6ce598eb29a151c3fbf033d37576a94b3c93ee112d6c8 WatchSource:0}: Error finding container 94226b47c5d56d2326a6ce598eb29a151c3fbf033d37576a94b3c93ee112d6c8: Status 404 returned error can't find the container with id 94226b47c5d56d2326a6ce598eb29a151c3fbf033d37576a94b3c93ee112d6c8 Feb 16 11:31:35 crc kubenswrapper[4797]: I0216 11:31:35.495206 4797 generic.go:334] "Generic (PLEG): container finished" podID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerID="b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70" exitCode=0 Feb 16 11:31:35 crc kubenswrapper[4797]: I0216 11:31:35.495301 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6qsk" event={"ID":"6cd7d105-1a64-4369-8536-20b9ed1a8d6c","Type":"ContainerDied","Data":"b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70"} Feb 16 11:31:35 crc kubenswrapper[4797]: I0216 11:31:35.495488 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6qsk" event={"ID":"6cd7d105-1a64-4369-8536-20b9ed1a8d6c","Type":"ContainerStarted","Data":"94226b47c5d56d2326a6ce598eb29a151c3fbf033d37576a94b3c93ee112d6c8"} Feb 16 11:31:37 crc kubenswrapper[4797]: I0216 11:31:37.526619 4797 generic.go:334] "Generic (PLEG): container finished" podID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerID="16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393" exitCode=0 Feb 16 11:31:37 crc kubenswrapper[4797]: I0216 11:31:37.526682 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6qsk" event={"ID":"6cd7d105-1a64-4369-8536-20b9ed1a8d6c","Type":"ContainerDied","Data":"16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393"} Feb 16 11:31:37 crc kubenswrapper[4797]: E0216 11:31:37.986864 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:31:38 crc kubenswrapper[4797]: I0216 11:31:38.539252 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6qsk" event={"ID":"6cd7d105-1a64-4369-8536-20b9ed1a8d6c","Type":"ContainerStarted","Data":"70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860"} Feb 16 11:31:38 crc kubenswrapper[4797]: I0216 11:31:38.568900 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-x6qsk" podStartSLOduration=2.133512554 podStartE2EDuration="4.568879958s" podCreationTimestamp="2026-02-16 11:31:34 +0000 UTC" firstStartedPulling="2026-02-16 11:31:35.497654158 +0000 UTC m=+1490.217839138" lastFinishedPulling="2026-02-16 11:31:37.933021552 +0000 UTC m=+1492.653206542" observedRunningTime="2026-02-16 11:31:38.562540484 +0000 UTC m=+1493.282725494" watchObservedRunningTime="2026-02-16 11:31:38.568879958 +0000 UTC m=+1493.289064948" Feb 16 11:31:44 crc kubenswrapper[4797]: I0216 11:31:44.422734 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:44 crc kubenswrapper[4797]: I0216 11:31:44.423251 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:44 crc kubenswrapper[4797]: I0216 11:31:44.481236 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:44 crc kubenswrapper[4797]: I0216 11:31:44.675187 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:44 crc kubenswrapper[4797]: I0216 11:31:44.730413 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6qsk"] Feb 16 11:31:46 crc kubenswrapper[4797]: I0216 11:31:46.629101 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x6qsk" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="registry-server" containerID="cri-o://70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860" gracePeriod=2 Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.155793 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.276289 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-catalog-content\") pod \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.276414 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-utilities\") pod \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.276532 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prkrd\" (UniqueName: \"kubernetes.io/projected/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-kube-api-access-prkrd\") pod \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\" (UID: \"6cd7d105-1a64-4369-8536-20b9ed1a8d6c\") " Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.277277 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-utilities" (OuterVolumeSpecName: "utilities") pod "6cd7d105-1a64-4369-8536-20b9ed1a8d6c" (UID: "6cd7d105-1a64-4369-8536-20b9ed1a8d6c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.277417 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.282076 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-kube-api-access-prkrd" (OuterVolumeSpecName: "kube-api-access-prkrd") pod "6cd7d105-1a64-4369-8536-20b9ed1a8d6c" (UID: "6cd7d105-1a64-4369-8536-20b9ed1a8d6c"). InnerVolumeSpecName "kube-api-access-prkrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.379502 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prkrd\" (UniqueName: \"kubernetes.io/projected/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-kube-api-access-prkrd\") on node \"crc\" DevicePath \"\"" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.400465 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cd7d105-1a64-4369-8536-20b9ed1a8d6c" (UID: "6cd7d105-1a64-4369-8536-20b9ed1a8d6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.481303 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cd7d105-1a64-4369-8536-20b9ed1a8d6c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.639687 4797 generic.go:334] "Generic (PLEG): container finished" podID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerID="70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860" exitCode=0 Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.639741 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6qsk" event={"ID":"6cd7d105-1a64-4369-8536-20b9ed1a8d6c","Type":"ContainerDied","Data":"70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860"} Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.639771 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x6qsk" event={"ID":"6cd7d105-1a64-4369-8536-20b9ed1a8d6c","Type":"ContainerDied","Data":"94226b47c5d56d2326a6ce598eb29a151c3fbf033d37576a94b3c93ee112d6c8"} Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.639792 4797 scope.go:117] "RemoveContainer" containerID="70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.639940 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x6qsk" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.670148 4797 scope.go:117] "RemoveContainer" containerID="16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.679913 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6qsk"] Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.701425 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x6qsk"] Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.707699 4797 scope.go:117] "RemoveContainer" containerID="b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.754906 4797 scope.go:117] "RemoveContainer" containerID="70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860" Feb 16 11:31:47 crc kubenswrapper[4797]: E0216 11:31:47.755708 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860\": container with ID starting with 70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860 not found: ID does not exist" containerID="70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.755772 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860"} err="failed to get container status \"70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860\": rpc error: code = NotFound desc = could not find container \"70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860\": container with ID starting with 70e9e96c026cc1a0b975ca756320010d23fca29c144f2664c94be5b1cb66a860 not found: ID does not exist" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.755819 4797 scope.go:117] "RemoveContainer" containerID="16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393" Feb 16 11:31:47 crc kubenswrapper[4797]: E0216 11:31:47.756109 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393\": container with ID starting with 16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393 not found: ID does not exist" containerID="16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.756136 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393"} err="failed to get container status \"16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393\": rpc error: code = NotFound desc = could not find container \"16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393\": container with ID starting with 16d33d6bb56c101e0981b38650d30534e910694ad13370ceb517565dcae33393 not found: ID does not exist" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.756157 4797 scope.go:117] "RemoveContainer" containerID="b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70" Feb 16 11:31:47 crc kubenswrapper[4797]: E0216 11:31:47.756460 4797 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70\": container with ID starting with b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70 not found: ID does not exist" containerID="b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70" Feb 16 11:31:47 crc kubenswrapper[4797]: I0216 11:31:47.756515 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70"} err="failed to get container status \"b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70\": rpc error: code = NotFound desc = could not find container \"b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70\": container with ID starting with b41792be66d54b9c84ae1cc18c9179b2e096b419816007393445f002b895ab70 not found: ID does not exist" Feb 16 11:31:48 crc kubenswrapper[4797]: I0216 11:31:48.008391 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" path="/var/lib/kubelet/pods/6cd7d105-1a64-4369-8536-20b9ed1a8d6c/volumes" Feb 16 11:31:51 crc kubenswrapper[4797]: E0216 11:31:51.984614 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:31:59 crc kubenswrapper[4797]: I0216 11:31:59.939431 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wxgwz"] Feb 16 11:31:59 crc kubenswrapper[4797]: E0216 11:31:59.941432 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="extract-utilities" Feb 16 11:31:59 crc kubenswrapper[4797]: I0216 11:31:59.941452 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="extract-utilities" Feb 16 11:31:59 crc kubenswrapper[4797]: E0216 11:31:59.941485 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="extract-content" Feb 16 11:31:59 crc kubenswrapper[4797]: I0216 11:31:59.941494 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="extract-content" Feb 16 11:31:59 crc kubenswrapper[4797]: E0216 11:31:59.941519 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="registry-server" Feb 16 11:31:59 crc kubenswrapper[4797]: I0216 11:31:59.941528 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="registry-server" Feb 16 11:31:59 crc kubenswrapper[4797]: I0216 11:31:59.941844 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cd7d105-1a64-4369-8536-20b9ed1a8d6c" containerName="registry-server" Feb 16 11:31:59 crc kubenswrapper[4797]: I0216 11:31:59.943865 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:31:59 crc kubenswrapper[4797]: I0216 11:31:59.952253 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxgwz"] Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.068953 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-utilities\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.069033 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-catalog-content\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.069233 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb2pg\" (UniqueName: \"kubernetes.io/projected/574073c9-c13c-4ad2-b47a-760adba55b52-kube-api-access-bb2pg\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.171537 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-utilities\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.171663 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-catalog-content\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.171793 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb2pg\" (UniqueName: \"kubernetes.io/projected/574073c9-c13c-4ad2-b47a-760adba55b52-kube-api-access-bb2pg\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.172441 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-utilities\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.172441 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-catalog-content\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.191691 4797 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bb2pg\" (UniqueName: \"kubernetes.io/projected/574073c9-c13c-4ad2-b47a-760adba55b52-kube-api-access-bb2pg\") pod \"certified-operators-wxgwz\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.262427 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:00 crc kubenswrapper[4797]: I0216 11:32:00.782681 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wxgwz"] Feb 16 11:32:01 crc kubenswrapper[4797]: I0216 11:32:01.815422 4797 generic.go:334] "Generic (PLEG): container finished" podID="574073c9-c13c-4ad2-b47a-760adba55b52" containerID="46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880" exitCode=0 Feb 16 11:32:01 crc kubenswrapper[4797]: I0216 11:32:01.815552 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxgwz" event={"ID":"574073c9-c13c-4ad2-b47a-760adba55b52","Type":"ContainerDied","Data":"46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880"} Feb 16 11:32:01 crc kubenswrapper[4797]: I0216 11:32:01.815733 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxgwz" event={"ID":"574073c9-c13c-4ad2-b47a-760adba55b52","Type":"ContainerStarted","Data":"e98f42c15f23c512ebe2b3308ab1226610762a96e8ef61b63520ffb1764d673c"} Feb 16 11:32:01 crc kubenswrapper[4797]: I0216 11:32:01.820888 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:32:03 crc kubenswrapper[4797]: I0216 11:32:03.313641 4797 scope.go:117] "RemoveContainer" containerID="a04639c9b7a5ebdecd4efe75e843728a89325d33c478251558b69d63ebc7114b" Feb 16 11:32:03 crc kubenswrapper[4797]: I0216 11:32:03.839119 4797 generic.go:334] "Generic (PLEG): container finished" podID="574073c9-c13c-4ad2-b47a-760adba55b52" containerID="2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3" exitCode=0 Feb 16 11:32:03 crc kubenswrapper[4797]: I0216 11:32:03.839177 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxgwz" event={"ID":"574073c9-c13c-4ad2-b47a-760adba55b52","Type":"ContainerDied","Data":"2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3"} Feb 16 11:32:03 crc kubenswrapper[4797]: E0216 11:32:03.983657 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:32:04 crc kubenswrapper[4797]: I0216 11:32:04.874226 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxgwz" event={"ID":"574073c9-c13c-4ad2-b47a-760adba55b52","Type":"ContainerStarted","Data":"5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c"} Feb 16 11:32:04 crc kubenswrapper[4797]: I0216 11:32:04.904758 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wxgwz" podStartSLOduration=3.467818204 podStartE2EDuration="5.904739019s" podCreationTimestamp="2026-02-16 11:31:59 +0000 UTC" 
firstStartedPulling="2026-02-16 11:32:01.820254207 +0000 UTC m=+1516.540439217" lastFinishedPulling="2026-02-16 11:32:04.257175022 +0000 UTC m=+1518.977360032" observedRunningTime="2026-02-16 11:32:04.893987054 +0000 UTC m=+1519.614172054" watchObservedRunningTime="2026-02-16 11:32:04.904739019 +0000 UTC m=+1519.624923999" Feb 16 11:32:10 crc kubenswrapper[4797]: I0216 11:32:10.262703 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:10 crc kubenswrapper[4797]: I0216 11:32:10.263355 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:10 crc kubenswrapper[4797]: I0216 11:32:10.320891 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:11 crc kubenswrapper[4797]: I0216 11:32:11.009341 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:11 crc kubenswrapper[4797]: I0216 11:32:11.067386 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxgwz"] Feb 16 11:32:12 crc kubenswrapper[4797]: I0216 11:32:12.978352 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wxgwz" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="registry-server" containerID="cri-o://5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c" gracePeriod=2 Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.536203 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.699313 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-catalog-content\") pod \"574073c9-c13c-4ad2-b47a-760adba55b52\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.699437 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-utilities\") pod \"574073c9-c13c-4ad2-b47a-760adba55b52\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.699608 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bb2pg\" (UniqueName: \"kubernetes.io/projected/574073c9-c13c-4ad2-b47a-760adba55b52-kube-api-access-bb2pg\") pod \"574073c9-c13c-4ad2-b47a-760adba55b52\" (UID: \"574073c9-c13c-4ad2-b47a-760adba55b52\") " Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.700181 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-utilities" (OuterVolumeSpecName: "utilities") pod "574073c9-c13c-4ad2-b47a-760adba55b52" (UID: "574073c9-c13c-4ad2-b47a-760adba55b52"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.802638 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.993797 4797 generic.go:334] "Generic (PLEG): container finished" podID="574073c9-c13c-4ad2-b47a-760adba55b52" containerID="5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c" exitCode=0 Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.994141 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wxgwz" Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.995703 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxgwz" event={"ID":"574073c9-c13c-4ad2-b47a-760adba55b52","Type":"ContainerDied","Data":"5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c"} Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.995725 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wxgwz" event={"ID":"574073c9-c13c-4ad2-b47a-760adba55b52","Type":"ContainerDied","Data":"e98f42c15f23c512ebe2b3308ab1226610762a96e8ef61b63520ffb1764d673c"} Feb 16 11:32:13 crc kubenswrapper[4797]: I0216 11:32:13.995741 4797 scope.go:117] "RemoveContainer" containerID="5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.019488 4797 scope.go:117] "RemoveContainer" containerID="2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.215853 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574073c9-c13c-4ad2-b47a-760adba55b52-kube-api-access-bb2pg" (OuterVolumeSpecName: "kube-api-access-bb2pg") pod "574073c9-c13c-4ad2-b47a-760adba55b52" (UID: "574073c9-c13c-4ad2-b47a-760adba55b52"). InnerVolumeSpecName "kube-api-access-bb2pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.252649 4797 scope.go:117] "RemoveContainer" containerID="46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.317429 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bb2pg\" (UniqueName: \"kubernetes.io/projected/574073c9-c13c-4ad2-b47a-760adba55b52-kube-api-access-bb2pg\") on node \"crc\" DevicePath \"\"" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.319847 4797 scope.go:117] "RemoveContainer" containerID="5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c" Feb 16 11:32:14 crc kubenswrapper[4797]: E0216 11:32:14.320566 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c\": container with ID starting with 5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c not found: ID does not exist" containerID="5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.320623 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c"} err="failed to get container status \"5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c\": rpc error: code = NotFound desc = could not find container \"5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c\": container with ID starting with 5e08e7e8afe22c4b2b032fc1842fd7afa78faf17b13cbd7fef148704080e9d3c not found: ID does not exist" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.320651 4797 scope.go:117] "RemoveContainer" containerID="2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3" Feb 16 11:32:14 crc kubenswrapper[4797]: E0216 11:32:14.321904 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3\": container with ID starting with 2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3 not found: ID does not exist" containerID="2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.321969 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3"} err="failed to get container status \"2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3\": rpc error: code = NotFound desc = could not find container \"2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3\": container with ID starting with 2665e6bf633595385cb54c0b764cd790f5261944b6de1ca0f839d09726a048c3 not found: ID does not exist" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.321997 4797 scope.go:117] "RemoveContainer" containerID="46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880" Feb 16 11:32:14 crc kubenswrapper[4797]: E0216 11:32:14.322292 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880\": container with ID starting with 46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880 not found: ID does not 
exist" containerID="46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.322321 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880"} err="failed to get container status \"46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880\": rpc error: code = NotFound desc = could not find container \"46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880\": container with ID starting with 46bc0b23ff86be5312a6e469f5b5d74130c762973e2fe0a976c7300eeb65a880 not found: ID does not exist" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.359399 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "574073c9-c13c-4ad2-b47a-760adba55b52" (UID: "574073c9-c13c-4ad2-b47a-760adba55b52"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.419279 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574073c9-c13c-4ad2-b47a-760adba55b52-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.637892 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wxgwz"] Feb 16 11:32:14 crc kubenswrapper[4797]: I0216 11:32:14.652231 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wxgwz"] Feb 16 11:32:15 crc kubenswrapper[4797]: I0216 11:32:15.994264 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" path="/var/lib/kubelet/pods/574073c9-c13c-4ad2-b47a-760adba55b52/volumes" Feb 16 11:32:16 crc kubenswrapper[4797]: E0216 11:32:16.985682 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:32:28 crc kubenswrapper[4797]: E0216 11:32:28.985599 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:32:43 crc kubenswrapper[4797]: E0216 11:32:43.986763 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:32:58 crc kubenswrapper[4797]: E0216 11:32:58.986126 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:33:03 crc kubenswrapper[4797]: I0216 11:33:03.408528 4797 scope.go:117] "RemoveContainer" containerID="61b0101d153c0a1978f9b731c493f10ade2e6d712b3b9ee6f5fa6f2b0547916e" Feb 16 11:33:03 crc kubenswrapper[4797]: I0216 11:33:03.447323 4797 scope.go:117] "RemoveContainer" containerID="56d10d8514d89b841ab2c5557e81b8cf55e9197e50a392c6084db042d8e5161c" Feb 16 11:33:03 crc kubenswrapper[4797]: I0216 11:33:03.472725 4797 scope.go:117] "RemoveContainer" containerID="068392753fe07fe9efc2d75cfdef54388401fa6ff9d164599a854dbaeb33fb60" Feb 16 11:33:03 crc kubenswrapper[4797]: I0216 11:33:03.499831 4797 scope.go:117] "RemoveContainer" containerID="2b9b55ead49dd0329c8b513566c2c3243b20c84378a7145d82cac73c7e1f245a" Feb 16 11:33:03 crc kubenswrapper[4797]: I0216 11:33:03.520878 4797 scope.go:117] "RemoveContainer" containerID="9309714b13cc4ae0e604631d6cb4f4f2a2e22140f414242db05755f7b894689b" Feb 16 11:33:03 crc kubenswrapper[4797]: I0216 11:33:03.544441 4797 scope.go:117] "RemoveContainer" containerID="0b44b44e4ec5a8f588432c89ba3976eba18143ce5194edf83079a6351b89e0a1" Feb 16 11:33:09 crc kubenswrapper[4797]: E0216 11:33:09.985598 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:33:11 crc kubenswrapper[4797]: I0216 11:33:11.896899 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:33:11 crc kubenswrapper[4797]: I0216 11:33:11.897317 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:33:24 crc kubenswrapper[4797]: E0216 11:33:24.985045 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:33:35 crc kubenswrapper[4797]: E0216 11:33:35.996665 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:33:41 crc kubenswrapper[4797]: I0216 11:33:41.703785 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:33:41 crc kubenswrapper[4797]: I0216 11:33:41.704403 4797 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:33:50 crc kubenswrapper[4797]: E0216 11:33:50.986855 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:34:03 crc kubenswrapper[4797]: I0216 11:34:03.633461 4797 scope.go:117] "RemoveContainer" containerID="2374e0b87252c78fc1a3717a35a9256f740d933a57dbf1750bc32f70a3ac2f71" Feb 16 11:34:03 crc kubenswrapper[4797]: I0216 11:34:03.657474 4797 scope.go:117] "RemoveContainer" containerID="5841bc64475aa0c59a4fcc46ad6533f52c0b6cc2f4c6ba6eb7ac20093aca835b" Feb 16 11:34:03 crc kubenswrapper[4797]: I0216 11:34:03.706692 4797 scope.go:117] "RemoveContainer" containerID="d47cca45405da8d9352199490ddecb8520d703efaef276cde64f167eed2decd2" Feb 16 11:34:05 crc kubenswrapper[4797]: E0216 11:34:05.992809 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:34:11 crc kubenswrapper[4797]: I0216 11:34:11.703606 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:34:11 crc kubenswrapper[4797]: I0216 11:34:11.704518 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:34:11 crc kubenswrapper[4797]: I0216 11:34:11.704571 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:34:11 crc kubenswrapper[4797]: I0216 11:34:11.705469 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:34:11 crc kubenswrapper[4797]: I0216 11:34:11.705544 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" gracePeriod=600 Feb 16 11:34:11 crc kubenswrapper[4797]: E0216 11:34:11.838933 4797 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:34:12 crc kubenswrapper[4797]: I0216 11:34:12.258433 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" exitCode=0 Feb 16 11:34:12 crc kubenswrapper[4797]: I0216 11:34:12.258486 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5"} Feb 16 11:34:12 crc kubenswrapper[4797]: I0216 11:34:12.258527 4797 scope.go:117] "RemoveContainer" containerID="4adc9a4c2c83159ff79fb70bc5ca20f35b8b3e0651a933445365cd8743b4c78b" Feb 16 11:34:12 crc kubenswrapper[4797]: I0216 11:34:12.259689 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:34:12 crc kubenswrapper[4797]: E0216 11:34:12.260429 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:34:18 crc kubenswrapper[4797]: E0216 11:34:18.984543 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:34:23 crc kubenswrapper[4797]: I0216 11:34:23.982965 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:34:23 crc kubenswrapper[4797]: E0216 11:34:23.983811 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:34:32 crc kubenswrapper[4797]: E0216 11:34:32.986115 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:34:37 crc kubenswrapper[4797]: I0216 11:34:37.985840 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:34:37 crc kubenswrapper[4797]: E0216 11:34:37.986431 
4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:34:44 crc kubenswrapper[4797]: E0216 11:34:44.985351 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:34:51 crc kubenswrapper[4797]: I0216 11:34:51.983897 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:34:51 crc kubenswrapper[4797]: E0216 11:34:51.985360 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:34:56 crc kubenswrapper[4797]: I0216 11:34:56.054787 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-spl6l"] Feb 16 11:34:56 crc kubenswrapper[4797]: I0216 11:34:56.071424 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ebfd-account-create-update-b8xtr"] Feb 16 11:34:56 crc kubenswrapper[4797]: I0216 11:34:56.086956 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ebfd-account-create-update-b8xtr"] Feb 16 11:34:56 crc kubenswrapper[4797]: I0216 11:34:56.098174 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-spl6l"] Feb 16 11:34:56 crc kubenswrapper[4797]: E0216 11:34:56.989178 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:34:57 crc kubenswrapper[4797]: I0216 11:34:57.043657 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-xfrtt"] Feb 16 11:34:57 crc kubenswrapper[4797]: I0216 11:34:57.063516 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-xfrtt"] Feb 16 11:34:57 crc kubenswrapper[4797]: I0216 11:34:57.076450 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e09a-account-create-update-8scsc"] Feb 16 11:34:57 crc kubenswrapper[4797]: I0216 11:34:57.085381 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e09a-account-create-update-8scsc"] Feb 16 11:34:58 crc kubenswrapper[4797]: I0216 11:34:57.999922 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6491fced-b625-48df-a033-29cd854a45da" path="/var/lib/kubelet/pods/6491fced-b625-48df-a033-29cd854a45da/volumes" Feb 16 11:34:58 crc kubenswrapper[4797]: 
I0216 11:34:58.001334 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73dbbcd9-7ef8-4a40-ad11-d3f6de830711" path="/var/lib/kubelet/pods/73dbbcd9-7ef8-4a40-ad11-d3f6de830711/volumes" Feb 16 11:34:58 crc kubenswrapper[4797]: I0216 11:34:58.002356 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979573d1-ce03-454a-9d96-94372635c0cd" path="/var/lib/kubelet/pods/979573d1-ce03-454a-9d96-94372635c0cd/volumes" Feb 16 11:34:58 crc kubenswrapper[4797]: I0216 11:34:58.003354 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b75828bf-9dfb-4337-9ac4-710a7fbb62db" path="/var/lib/kubelet/pods/b75828bf-9dfb-4337-9ac4-710a7fbb62db/volumes" Feb 16 11:34:59 crc kubenswrapper[4797]: I0216 11:34:59.025823 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-j46lb"] Feb 16 11:34:59 crc kubenswrapper[4797]: I0216 11:34:59.036468 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-j46lb"] Feb 16 11:34:59 crc kubenswrapper[4797]: I0216 11:34:59.993366 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2435c436-da01-4acc-a193-7f1337ece1ef" path="/var/lib/kubelet/pods/2435c436-da01-4acc-a193-7f1337ece1ef/volumes" Feb 16 11:35:00 crc kubenswrapper[4797]: I0216 11:35:00.040093 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a443-account-create-update-dqvgt"] Feb 16 11:35:00 crc kubenswrapper[4797]: I0216 11:35:00.051375 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a443-account-create-update-dqvgt"] Feb 16 11:35:01 crc kubenswrapper[4797]: I0216 11:35:01.995523 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67122d2a-58c7-48a5-893b-1ad4382838eb" path="/var/lib/kubelet/pods/67122d2a-58c7-48a5-893b-1ad4382838eb/volumes" Feb 16 11:35:03 crc kubenswrapper[4797]: I0216 11:35:03.786203 4797 scope.go:117] "RemoveContainer" containerID="a9e03302729a7df71a82c9c25e74f00a943ebdf097fd45694a73f40f411024eb" Feb 16 11:35:03 crc kubenswrapper[4797]: I0216 11:35:03.810121 4797 scope.go:117] "RemoveContainer" containerID="e669249122d87a91f8b4e07b07fecea9be861741461be519295aaf4e0b45ab21" Feb 16 11:35:03 crc kubenswrapper[4797]: I0216 11:35:03.883455 4797 scope.go:117] "RemoveContainer" containerID="1e5ca7f38eaad9b767f59004d6270f9592320de7f66cc7456315450815de9b19" Feb 16 11:35:03 crc kubenswrapper[4797]: I0216 11:35:03.964596 4797 scope.go:117] "RemoveContainer" containerID="6821a9a9bde0fb4eb1048144df3a82a0ddfb570e9b0d71eb7dd970f5c9b7be8c" Feb 16 11:35:04 crc kubenswrapper[4797]: I0216 11:35:04.001556 4797 scope.go:117] "RemoveContainer" containerID="d30e1b2abb360f4a19e1cab67d7af73ee4d0bab6554dc10f666948c08e7a7978" Feb 16 11:35:04 crc kubenswrapper[4797]: I0216 11:35:04.055079 4797 scope.go:117] "RemoveContainer" containerID="f59edb93b2aa7f23639dfca5bcf7c71513ed5f21c66d1b63e52079dfd59f9399" Feb 16 11:35:04 crc kubenswrapper[4797]: I0216 11:35:04.984091 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:35:04 crc kubenswrapper[4797]: E0216 11:35:04.984455 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:35:09 crc kubenswrapper[4797]: E0216 11:35:09.985434 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:35:11 crc kubenswrapper[4797]: I0216 11:35:11.029852 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-a1eb-account-create-update-d8ls4"] Feb 16 11:35:11 crc kubenswrapper[4797]: I0216 11:35:11.041148 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-hlkwq"] Feb 16 11:35:11 crc kubenswrapper[4797]: I0216 11:35:11.050227 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-a1eb-account-create-update-d8ls4"] Feb 16 11:35:11 crc kubenswrapper[4797]: I0216 11:35:11.057836 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-hlkwq"] Feb 16 11:35:12 crc kubenswrapper[4797]: I0216 11:35:12.016936 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8455a08f-921f-44b1-a66b-b8ac256526d9" path="/var/lib/kubelet/pods/8455a08f-921f-44b1-a66b-b8ac256526d9/volumes" Feb 16 11:35:12 crc kubenswrapper[4797]: I0216 11:35:12.018572 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca5b2e47-863e-424d-9dd6-d8ed4b9e518e" path="/var/lib/kubelet/pods/ca5b2e47-863e-424d-9dd6-d8ed4b9e518e/volumes" Feb 16 11:35:12 crc kubenswrapper[4797]: I0216 11:35:12.033474 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-pmnbl"] Feb 16 11:35:12 crc kubenswrapper[4797]: I0216 11:35:12.043817 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-pmnbl"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.087292 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-zfrh2"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.097202 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-t99zf"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.115975 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7460-account-create-update-87l9f"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.129147 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-zfrh2"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.140295 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7460-account-create-update-87l9f"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.148023 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-a80b-account-create-update-swbm2"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.170006 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-t99zf"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.174883 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-5266-account-create-update-bkcfj"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.184670 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-a80b-account-create-update-swbm2"] Feb 
16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.210091 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-fk95f"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.210171 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-5266-account-create-update-bkcfj"] Feb 16 11:35:13 crc kubenswrapper[4797]: I0216 11:35:13.210190 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-fk95f"] Feb 16 11:35:14 crc kubenswrapper[4797]: I0216 11:35:14.003268 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6650dd6b-74e9-407a-8690-6845e881427f" path="/var/lib/kubelet/pods/6650dd6b-74e9-407a-8690-6845e881427f/volumes" Feb 16 11:35:14 crc kubenswrapper[4797]: I0216 11:35:14.005111 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e57501-f0cf-48c7-831e-d6782b7c1037" path="/var/lib/kubelet/pods/67e57501-f0cf-48c7-831e-d6782b7c1037/volumes" Feb 16 11:35:14 crc kubenswrapper[4797]: I0216 11:35:14.006788 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e" path="/var/lib/kubelet/pods/74f1a3eb-a2b3-4df1-9ff8-0dd525ea746e/volumes" Feb 16 11:35:14 crc kubenswrapper[4797]: I0216 11:35:14.008432 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa1b2e9-8e1b-4a90-aae6-49476b717d71" path="/var/lib/kubelet/pods/afa1b2e9-8e1b-4a90-aae6-49476b717d71/volumes" Feb 16 11:35:14 crc kubenswrapper[4797]: I0216 11:35:14.011272 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b289e9a1-2299-4c30-8a6a-ac125a3342ca" path="/var/lib/kubelet/pods/b289e9a1-2299-4c30-8a6a-ac125a3342ca/volumes" Feb 16 11:35:14 crc kubenswrapper[4797]: I0216 11:35:14.013127 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d13f8337-bf62-4444-bb7b-fbb9699373d4" path="/var/lib/kubelet/pods/d13f8337-bf62-4444-bb7b-fbb9699373d4/volumes" Feb 16 11:35:14 crc kubenswrapper[4797]: I0216 11:35:14.014667 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e31bcde1-c735-4e57-907d-2876334827d6" path="/var/lib/kubelet/pods/e31bcde1-c735-4e57-907d-2876334827d6/volumes" Feb 16 11:35:15 crc kubenswrapper[4797]: I0216 11:35:15.983664 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:35:15 crc kubenswrapper[4797]: E0216 11:35:15.984532 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:35:24 crc kubenswrapper[4797]: E0216 11:35:24.984905 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:35:27 crc kubenswrapper[4797]: I0216 11:35:27.983946 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:35:27 crc kubenswrapper[4797]: E0216 
11:35:27.984725 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:35:29 crc kubenswrapper[4797]: I0216 11:35:29.041122 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-gbv4g"] Feb 16 11:35:29 crc kubenswrapper[4797]: I0216 11:35:29.056122 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-gbv4g"] Feb 16 11:35:30 crc kubenswrapper[4797]: I0216 11:35:29.999891 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f56706-9d2d-4034-ab0d-ed5023bdde18" path="/var/lib/kubelet/pods/54f56706-9d2d-4034-ab0d-ed5023bdde18/volumes" Feb 16 11:35:35 crc kubenswrapper[4797]: I0216 11:35:35.028862 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-zwkg2"] Feb 16 11:35:35 crc kubenswrapper[4797]: I0216 11:35:35.042856 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-zwkg2"] Feb 16 11:35:35 crc kubenswrapper[4797]: I0216 11:35:35.993360 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448a4a0f-a469-415f-8dcc-6223ee884c29" path="/var/lib/kubelet/pods/448a4a0f-a469-415f-8dcc-6223ee884c29/volumes" Feb 16 11:35:38 crc kubenswrapper[4797]: E0216 11:35:38.987543 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:35:41 crc kubenswrapper[4797]: I0216 11:35:41.985359 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:35:41 crc kubenswrapper[4797]: E0216 11:35:41.986295 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:35:53 crc kubenswrapper[4797]: E0216 11:35:53.985370 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:35:54 crc kubenswrapper[4797]: I0216 11:35:54.983989 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:35:54 crc kubenswrapper[4797]: E0216 11:35:54.985176 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:35:58 crc kubenswrapper[4797]: I0216 11:35:58.043559 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-grph6"] Feb 16 11:35:58 crc kubenswrapper[4797]: I0216 11:35:58.054922 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-grph6"] Feb 16 11:35:59 crc kubenswrapper[4797]: I0216 11:35:59.993805 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc2d12e-1b1b-43cc-baad-ff26e8423891" path="/var/lib/kubelet/pods/bbc2d12e-1b1b-43cc-baad-ff26e8423891/volumes" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.042892 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5kmr4"] Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.058900 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5kmr4"] Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.203863 4797 scope.go:117] "RemoveContainer" containerID="0acc9339135065381acc685e6fe8636570ad035642ea46d8da868a4fd5d9730d" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.242179 4797 scope.go:117] "RemoveContainer" containerID="718500b1a588f795158e350d37d97282db7b75d658f9facf08b8ce3ed56b994b" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.296593 4797 scope.go:117] "RemoveContainer" containerID="3a65263a2a7e396e956e7a22682c657dafad794a59d372110ff6b19e5b2691aa" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.365489 4797 scope.go:117] "RemoveContainer" containerID="ec1f87adf21f050635a4a4bbc350991fa185bde0cb5f7bd4ced052beff136436" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.390335 4797 scope.go:117] "RemoveContainer" containerID="d8a376174d977c9cae8f858563d6c7cee2850c184b870c353becf3fe2b51dcba" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.432959 4797 scope.go:117] "RemoveContainer" containerID="d5e0ed896d60ef2405447c3bfb2ff06086da5dd11d0c951c36562714c7b6e738" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.479567 4797 scope.go:117] "RemoveContainer" containerID="5af453300b23a63287567169a3dcbb9b7b44a69741b5dc211b1a7bc79b6b5807" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.496563 4797 scope.go:117] "RemoveContainer" containerID="e7910a16b83089268576c8da00999f2414abb2e3fa40a8c0c211db850afc4a71" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.522662 4797 scope.go:117] "RemoveContainer" containerID="770a6afe839bb16ba5d591bb9a43c00a3bb8b1ffdf84b36d04aa03e6352d2c57" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.543444 4797 scope.go:117] "RemoveContainer" containerID="d8dadc3d00c78202bdaac763f8fa2ec67cd6618de8bf87d82d6536d85ffaad28" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.561444 4797 scope.go:117] "RemoveContainer" containerID="233da805d3129489b8b04771c283b0a723e20d5423f5499fd319cf1902de9f30" Feb 16 11:36:04 crc kubenswrapper[4797]: I0216 11:36:04.581048 4797 scope.go:117] "RemoveContainer" containerID="f2a9ed58a09ed80438a40c91a5656cac8119e4dab548ee7251ce5374121429a3" Feb 16 11:36:05 crc kubenswrapper[4797]: I0216 11:36:05.046709 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-z8bpc"] Feb 16 11:36:05 crc kubenswrapper[4797]: I0216 11:36:05.060237 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-bootstrap-4xnc7"] Feb 16 11:36:05 crc kubenswrapper[4797]: I0216 11:36:05.070850 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-z8bpc"] Feb 16 11:36:05 crc kubenswrapper[4797]: I0216 11:36:05.079673 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4xnc7"] Feb 16 11:36:06 crc kubenswrapper[4797]: I0216 11:36:06.000390 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24fea779-c008-4fda-b2d0-e3201f7dfaed" path="/var/lib/kubelet/pods/24fea779-c008-4fda-b2d0-e3201f7dfaed/volumes" Feb 16 11:36:06 crc kubenswrapper[4797]: I0216 11:36:06.001732 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f90c62-8793-4bcc-8b06-9b0b710776d7" path="/var/lib/kubelet/pods/35f90c62-8793-4bcc-8b06-9b0b710776d7/volumes" Feb 16 11:36:06 crc kubenswrapper[4797]: I0216 11:36:06.002361 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3129f86-1462-4e40-8695-e4ae737ebf5f" path="/var/lib/kubelet/pods/e3129f86-1462-4e40-8695-e4ae737ebf5f/volumes" Feb 16 11:36:06 crc kubenswrapper[4797]: E0216 11:36:06.984804 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:36:07 crc kubenswrapper[4797]: I0216 11:36:07.982724 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:36:07 crc kubenswrapper[4797]: E0216 11:36:07.983093 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:36:18 crc kubenswrapper[4797]: E0216 11:36:18.106706 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:36:18 crc kubenswrapper[4797]: E0216 11:36:18.107384 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:36:18 crc kubenswrapper[4797]: E0216 11:36:18.107836 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 11:36:18 crc kubenswrapper[4797]: E0216 11:36:18.109070 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:36:18 crc kubenswrapper[4797]: I0216 11:36:18.983269 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:36:18 crc kubenswrapper[4797]: E0216 11:36:18.983616 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:36:28 crc kubenswrapper[4797]: E0216 11:36:28.985364 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:36:30 crc kubenswrapper[4797]: I0216 11:36:30.056129 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2dfc7"] Feb 16 11:36:30 crc kubenswrapper[4797]: I0216 11:36:30.065247 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2dfc7"] Feb 16 11:36:30 crc kubenswrapper[4797]: I0216 11:36:30.982443 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:36:30 crc kubenswrapper[4797]: E0216 11:36:30.982767 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:36:31 crc kubenswrapper[4797]: I0216 11:36:31.998404 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062948d0-fd09-4e11-904d-a346a430ee4f" path="/var/lib/kubelet/pods/062948d0-fd09-4e11-904d-a346a430ee4f/volumes" Feb 16 11:36:39 crc kubenswrapper[4797]: E0216 11:36:39.988407 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:36:42 crc kubenswrapper[4797]: I0216 11:36:42.982929 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:36:42 crc kubenswrapper[4797]: E0216 11:36:42.984021 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:36:46 crc kubenswrapper[4797]: 
I0216 11:36:46.215412 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-92z8n"] Feb 16 11:36:46 crc kubenswrapper[4797]: E0216 11:36:46.218468 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="registry-server" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.218510 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="registry-server" Feb 16 11:36:46 crc kubenswrapper[4797]: E0216 11:36:46.218557 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="extract-content" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.218567 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="extract-content" Feb 16 11:36:46 crc kubenswrapper[4797]: E0216 11:36:46.218614 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="extract-utilities" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.218623 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="extract-utilities" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.219345 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="574073c9-c13c-4ad2-b47a-760adba55b52" containerName="registry-server" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.225852 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.269075 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92z8n"] Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.344310 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8gqc\" (UniqueName: \"kubernetes.io/projected/eff30bff-a503-4be0-9f7d-22a650efac7f-kube-api-access-j8gqc\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.344384 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-catalog-content\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.344771 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-utilities\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.446313 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8gqc\" (UniqueName: \"kubernetes.io/projected/eff30bff-a503-4be0-9f7d-22a650efac7f-kube-api-access-j8gqc\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 
16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.446370 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-catalog-content\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.446485 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-utilities\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.446996 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-catalog-content\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.447032 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-utilities\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.468155 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8gqc\" (UniqueName: \"kubernetes.io/projected/eff30bff-a503-4be0-9f7d-22a650efac7f-kube-api-access-j8gqc\") pod \"community-operators-92z8n\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:46 crc kubenswrapper[4797]: I0216 11:36:46.565416 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:47 crc kubenswrapper[4797]: W0216 11:36:47.122692 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeff30bff_a503_4be0_9f7d_22a650efac7f.slice/crio-95a6ce5ed5ec8157d4eb813a096222f4e151d85363c2c9aafeaac9a27b8e2998 WatchSource:0}: Error finding container 95a6ce5ed5ec8157d4eb813a096222f4e151d85363c2c9aafeaac9a27b8e2998: Status 404 returned error can't find the container with id 95a6ce5ed5ec8157d4eb813a096222f4e151d85363c2c9aafeaac9a27b8e2998 Feb 16 11:36:47 crc kubenswrapper[4797]: I0216 11:36:47.123283 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92z8n"] Feb 16 11:36:47 crc kubenswrapper[4797]: I0216 11:36:47.925646 4797 generic.go:334] "Generic (PLEG): container finished" podID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerID="29a0cf6559759ef892ad560a7b344671723e9a5187b46c0b9993aab7fd5f89a0" exitCode=0 Feb 16 11:36:47 crc kubenswrapper[4797]: I0216 11:36:47.926021 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92z8n" event={"ID":"eff30bff-a503-4be0-9f7d-22a650efac7f","Type":"ContainerDied","Data":"29a0cf6559759ef892ad560a7b344671723e9a5187b46c0b9993aab7fd5f89a0"} Feb 16 11:36:47 crc kubenswrapper[4797]: I0216 11:36:47.926229 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92z8n" event={"ID":"eff30bff-a503-4be0-9f7d-22a650efac7f","Type":"ContainerStarted","Data":"95a6ce5ed5ec8157d4eb813a096222f4e151d85363c2c9aafeaac9a27b8e2998"} Feb 16 11:36:48 crc kubenswrapper[4797]: I0216 11:36:48.937100 4797 generic.go:334] "Generic (PLEG): container finished" podID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerID="d58a84e2eb35a9146adf8c87919e1e7eba2eaf58017b43b0883e693a2d7d791e" exitCode=0 Feb 16 11:36:48 crc kubenswrapper[4797]: I0216 11:36:48.937227 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92z8n" event={"ID":"eff30bff-a503-4be0-9f7d-22a650efac7f","Type":"ContainerDied","Data":"d58a84e2eb35a9146adf8c87919e1e7eba2eaf58017b43b0883e693a2d7d791e"} Feb 16 11:36:49 crc kubenswrapper[4797]: I0216 11:36:49.948262 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92z8n" event={"ID":"eff30bff-a503-4be0-9f7d-22a650efac7f","Type":"ContainerStarted","Data":"b4563194d308757277c76d992c543c23dad059d43ed972f0e8099a617efc0198"} Feb 16 11:36:49 crc kubenswrapper[4797]: I0216 11:36:49.974728 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-92z8n" podStartSLOduration=2.510673015 podStartE2EDuration="3.974706329s" podCreationTimestamp="2026-02-16 11:36:46 +0000 UTC" firstStartedPulling="2026-02-16 11:36:47.929421716 +0000 UTC m=+1802.649606686" lastFinishedPulling="2026-02-16 11:36:49.39345502 +0000 UTC m=+1804.113640000" observedRunningTime="2026-02-16 11:36:49.969206028 +0000 UTC m=+1804.689391018" watchObservedRunningTime="2026-02-16 11:36:49.974706329 +0000 UTC m=+1804.694891319" Feb 16 11:36:51 crc kubenswrapper[4797]: E0216 11:36:51.984508 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:36:56 crc kubenswrapper[4797]: I0216 11:36:56.566243 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:56 crc kubenswrapper[4797]: I0216 11:36:56.566928 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:56 crc kubenswrapper[4797]: I0216 11:36:56.617290 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:56 crc kubenswrapper[4797]: I0216 11:36:56.983524 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:36:56 crc kubenswrapper[4797]: E0216 11:36:56.984477 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:36:57 crc kubenswrapper[4797]: I0216 11:36:57.065917 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:36:59 crc kubenswrapper[4797]: I0216 11:36:59.600005 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92z8n"] Feb 16 11:36:59 crc kubenswrapper[4797]: I0216 11:36:59.600509 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-92z8n" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="registry-server" containerID="cri-o://b4563194d308757277c76d992c543c23dad059d43ed972f0e8099a617efc0198" gracePeriod=2 Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.062307 4797 generic.go:334] "Generic (PLEG): container finished" podID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerID="b4563194d308757277c76d992c543c23dad059d43ed972f0e8099a617efc0198" exitCode=0 Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.062625 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92z8n" event={"ID":"eff30bff-a503-4be0-9f7d-22a650efac7f","Type":"ContainerDied","Data":"b4563194d308757277c76d992c543c23dad059d43ed972f0e8099a617efc0198"} Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.062666 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-f7hz9"] Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.071147 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-2855-account-create-update-r99n9"] Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.078785 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-gbj9w"] Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.086217 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-f7hz9"] Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.094119 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2855-account-create-update-r99n9"] Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.102584 4797 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-gbj9w"] Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.219329 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.391469 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-utilities\") pod \"eff30bff-a503-4be0-9f7d-22a650efac7f\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.391742 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-catalog-content\") pod \"eff30bff-a503-4be0-9f7d-22a650efac7f\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.392021 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8gqc\" (UniqueName: \"kubernetes.io/projected/eff30bff-a503-4be0-9f7d-22a650efac7f-kube-api-access-j8gqc\") pod \"eff30bff-a503-4be0-9f7d-22a650efac7f\" (UID: \"eff30bff-a503-4be0-9f7d-22a650efac7f\") " Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.392357 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-utilities" (OuterVolumeSpecName: "utilities") pod "eff30bff-a503-4be0-9f7d-22a650efac7f" (UID: "eff30bff-a503-4be0-9f7d-22a650efac7f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.392867 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.397500 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff30bff-a503-4be0-9f7d-22a650efac7f-kube-api-access-j8gqc" (OuterVolumeSpecName: "kube-api-access-j8gqc") pod "eff30bff-a503-4be0-9f7d-22a650efac7f" (UID: "eff30bff-a503-4be0-9f7d-22a650efac7f"). InnerVolumeSpecName "kube-api-access-j8gqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.445299 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eff30bff-a503-4be0-9f7d-22a650efac7f" (UID: "eff30bff-a503-4be0-9f7d-22a650efac7f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.494751 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8gqc\" (UniqueName: \"kubernetes.io/projected/eff30bff-a503-4be0-9f7d-22a650efac7f-kube-api-access-j8gqc\") on node \"crc\" DevicePath \"\"" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.494793 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eff30bff-a503-4be0-9f7d-22a650efac7f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.997707 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d74122-c09d-42b3-9cd9-cf4a4a0b16cb" path="/var/lib/kubelet/pods/63d74122-c09d-42b3-9cd9-cf4a4a0b16cb/volumes" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.998561 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881f97b5-ecc4-4032-97f0-5cd87b06d39e" path="/var/lib/kubelet/pods/881f97b5-ecc4-4032-97f0-5cd87b06d39e/volumes" Feb 16 11:37:01 crc kubenswrapper[4797]: I0216 11:37:01.999438 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2e02e2-0ab2-4f24-8bae-e8613132a219" path="/var/lib/kubelet/pods/fa2e02e2-0ab2-4f24-8bae-e8613132a219/volumes" Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.045704 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-7lsx9"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.056417 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-355b-account-create-update-c2vkr"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.066353 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4e32-account-create-update-zj54l"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.075307 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92z8n" event={"ID":"eff30bff-a503-4be0-9f7d-22a650efac7f","Type":"ContainerDied","Data":"95a6ce5ed5ec8157d4eb813a096222f4e151d85363c2c9aafeaac9a27b8e2998"} Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.075380 4797 scope.go:117] "RemoveContainer" containerID="b4563194d308757277c76d992c543c23dad059d43ed972f0e8099a617efc0198" Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.075395 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92z8n" Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.082890 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-7lsx9"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.099142 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-355b-account-create-update-c2vkr"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.109374 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4e32-account-create-update-zj54l"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.111253 4797 scope.go:117] "RemoveContainer" containerID="d58a84e2eb35a9146adf8c87919e1e7eba2eaf58017b43b0883e693a2d7d791e" Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.124062 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92z8n"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.128184 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-92z8n"] Feb 16 11:37:02 crc kubenswrapper[4797]: I0216 11:37:02.132058 4797 scope.go:117] "RemoveContainer" containerID="29a0cf6559759ef892ad560a7b344671723e9a5187b46c0b9993aab7fd5f89a0" Feb 16 11:37:03 crc kubenswrapper[4797]: I0216 11:37:03.997629 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="343f74dd-7004-4231-aa8a-381eda1790b5" path="/var/lib/kubelet/pods/343f74dd-7004-4231-aa8a-381eda1790b5/volumes" Feb 16 11:37:03 crc kubenswrapper[4797]: I0216 11:37:03.998661 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557afa65-539f-4a13-9817-34714c8dd21d" path="/var/lib/kubelet/pods/557afa65-539f-4a13-9817-34714c8dd21d/volumes" Feb 16 11:37:03 crc kubenswrapper[4797]: I0216 11:37:03.999457 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db36174d-c724-45b7-a3a0-528fb5539864" path="/var/lib/kubelet/pods/db36174d-c724-45b7-a3a0-528fb5539864/volumes" Feb 16 11:37:04 crc kubenswrapper[4797]: I0216 11:37:04.000242 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" path="/var/lib/kubelet/pods/eff30bff-a503-4be0-9f7d-22a650efac7f/volumes" Feb 16 11:37:04 crc kubenswrapper[4797]: I0216 11:37:04.830550 4797 scope.go:117] "RemoveContainer" containerID="d7174d30e707071360b3ecfd552669901fdbdecfc6491397a01b003dbce53888" Feb 16 11:37:04 crc kubenswrapper[4797]: I0216 11:37:04.858384 4797 scope.go:117] "RemoveContainer" containerID="445198b22303290a64416d26d63969ce6ab88bfa4a565134ec5bea1c726106a2" Feb 16 11:37:04 crc kubenswrapper[4797]: I0216 11:37:04.911300 4797 scope.go:117] "RemoveContainer" containerID="f9c72f7c72989ba1042bc78ffd959886c6a5b24ba4907bdaef12fd75c7cf2d15" Feb 16 11:37:04 crc kubenswrapper[4797]: I0216 11:37:04.951994 4797 scope.go:117] "RemoveContainer" containerID="ead106d265fee7368ebd34864028cdba49a23285426e71c60e8aad6dc35e7e9f" Feb 16 11:37:04 crc kubenswrapper[4797]: E0216 11:37:04.985026 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:37:05 crc kubenswrapper[4797]: I0216 11:37:05.023502 4797 scope.go:117] "RemoveContainer" 
containerID="9fad625b91c4f4a963210889c31deed8e2cf4bc1eb4474ee4bb40520b5de9912" Feb 16 11:37:05 crc kubenswrapper[4797]: I0216 11:37:05.074978 4797 scope.go:117] "RemoveContainer" containerID="98a70302fbf2b9c6b1e49c622daa3a979d6e1cafc85f5c6e3f75d36132846f5d" Feb 16 11:37:05 crc kubenswrapper[4797]: I0216 11:37:05.122248 4797 scope.go:117] "RemoveContainer" containerID="d3088d8ca7f9e63d10733404e9a3bb8a8fc52a61ddb265b139346ddff28504f6" Feb 16 11:37:05 crc kubenswrapper[4797]: I0216 11:37:05.162036 4797 scope.go:117] "RemoveContainer" containerID="6c48e9dc3fac4bb4dd211d6331b5b437ef64a6f2faf0d1bc7878354a62b8cad5" Feb 16 11:37:05 crc kubenswrapper[4797]: I0216 11:37:05.196673 4797 scope.go:117] "RemoveContainer" containerID="1b7ee416101efa00dda5d3b6c8121977fb41d0e8b2fef83cceca88c40fd0436e" Feb 16 11:37:05 crc kubenswrapper[4797]: I0216 11:37:05.220611 4797 scope.go:117] "RemoveContainer" containerID="4bec00d59423bae312555e2b3bf85f34ca49fe55c6012ab5e98e18774b8c943a" Feb 16 11:37:10 crc kubenswrapper[4797]: I0216 11:37:10.982727 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:37:10 crc kubenswrapper[4797]: E0216 11:37:10.983477 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:37:15 crc kubenswrapper[4797]: E0216 11:37:15.990303 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:37:21 crc kubenswrapper[4797]: I0216 11:37:21.982968 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:37:21 crc kubenswrapper[4797]: E0216 11:37:21.983429 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:37:26 crc kubenswrapper[4797]: E0216 11:37:26.984041 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:37:34 crc kubenswrapper[4797]: I0216 11:37:34.039360 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tmss6"] Feb 16 11:37:34 crc kubenswrapper[4797]: I0216 11:37:34.048873 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tmss6"] Feb 16 11:37:35 crc kubenswrapper[4797]: I0216 11:37:35.996361 4797 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed21e184-023a-429d-9cda-9f23c24a84e7" path="/var/lib/kubelet/pods/ed21e184-023a-429d-9cda-9f23c24a84e7/volumes" Feb 16 11:37:36 crc kubenswrapper[4797]: I0216 11:37:36.982955 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:37:36 crc kubenswrapper[4797]: E0216 11:37:36.983827 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:37:38 crc kubenswrapper[4797]: E0216 11:37:38.986053 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:37:49 crc kubenswrapper[4797]: I0216 11:37:49.982734 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:37:49 crc kubenswrapper[4797]: E0216 11:37:49.983677 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:37:49 crc kubenswrapper[4797]: E0216 11:37:49.984693 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:37:59 crc kubenswrapper[4797]: I0216 11:37:59.046911 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m7cv5"] Feb 16 11:37:59 crc kubenswrapper[4797]: I0216 11:37:59.054876 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2mq9"] Feb 16 11:37:59 crc kubenswrapper[4797]: I0216 11:37:59.061929 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-m7cv5"] Feb 16 11:37:59 crc kubenswrapper[4797]: I0216 11:37:59.068822 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-s2mq9"] Feb 16 11:38:00 crc kubenswrapper[4797]: I0216 11:38:00.005763 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ccd652-9aab-49ee-bbad-cdb91133f3a6" path="/var/lib/kubelet/pods/25ccd652-9aab-49ee-bbad-cdb91133f3a6/volumes" Feb 16 11:38:00 crc kubenswrapper[4797]: I0216 11:38:00.007649 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="982c5633-bbd8-4437-b8b5-11e3b6783e19" path="/var/lib/kubelet/pods/982c5633-bbd8-4437-b8b5-11e3b6783e19/volumes" Feb 16 11:38:00 crc kubenswrapper[4797]: I0216 
11:38:00.983192 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:38:00 crc kubenswrapper[4797]: E0216 11:38:00.983454 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:38:00 crc kubenswrapper[4797]: E0216 11:38:00.985303 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:38:05 crc kubenswrapper[4797]: I0216 11:38:05.426413 4797 scope.go:117] "RemoveContainer" containerID="4f9d3169f2f6e894bd96783bcddf67034cbf3d40979ad943b78a467f492c124b" Feb 16 11:38:05 crc kubenswrapper[4797]: I0216 11:38:05.476775 4797 scope.go:117] "RemoveContainer" containerID="d12dffc4553c752285f065aa7f527e129b1ac2a2e6055a45e5b3a47397d49ca0" Feb 16 11:38:05 crc kubenswrapper[4797]: I0216 11:38:05.564025 4797 scope.go:117] "RemoveContainer" containerID="69c3b2a8783e3241c7aa07cb63c33e161076d61bf61b41d6fd6bf43af290e988" Feb 16 11:38:13 crc kubenswrapper[4797]: I0216 11:38:13.983751 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:38:13 crc kubenswrapper[4797]: E0216 11:38:13.984815 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:38:14 crc kubenswrapper[4797]: E0216 11:38:14.985865 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:38:27 crc kubenswrapper[4797]: I0216 11:38:27.983980 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:38:27 crc kubenswrapper[4797]: E0216 11:38:27.984641 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:38:29 crc kubenswrapper[4797]: E0216 11:38:29.984142 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:38:39 crc kubenswrapper[4797]: I0216 11:38:39.983554 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:38:39 crc kubenswrapper[4797]: E0216 11:38:39.984377 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:38:41 crc kubenswrapper[4797]: E0216 11:38:41.985132 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:38:43 crc kubenswrapper[4797]: I0216 11:38:43.038995 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxff6"] Feb 16 11:38:43 crc kubenswrapper[4797]: I0216 11:38:43.046926 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bxff6"] Feb 16 11:38:44 crc kubenswrapper[4797]: I0216 11:38:44.001417 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3a376f-d5a3-4695-8ace-93c71d98e93b" path="/var/lib/kubelet/pods/0b3a376f-d5a3-4695-8ace-93c71d98e93b/volumes" Feb 16 11:38:54 crc kubenswrapper[4797]: I0216 11:38:54.982842 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:38:54 crc kubenswrapper[4797]: E0216 11:38:54.983654 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:38:54 crc kubenswrapper[4797]: E0216 11:38:54.987847 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:39:05 crc kubenswrapper[4797]: I0216 11:39:05.739092 4797 scope.go:117] "RemoveContainer" containerID="021289fd304faa39aaf690303feba21fcbb31f01149499091b3dd610d009e0a6" Feb 16 11:39:07 crc kubenswrapper[4797]: E0216 11:39:07.985503 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:39:08 crc kubenswrapper[4797]: I0216 11:39:08.983219 
4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:39:08 crc kubenswrapper[4797]: E0216 11:39:08.984055 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:39:19 crc kubenswrapper[4797]: I0216 11:39:19.982412 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5" Feb 16 11:39:20 crc kubenswrapper[4797]: I0216 11:39:20.458671 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"436dfb51bb84994be6a8f00425e5ad1ba117a367b8df143eb4e404d177a03be0"} Feb 16 11:39:21 crc kubenswrapper[4797]: E0216 11:39:21.987742 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:39:33 crc kubenswrapper[4797]: E0216 11:39:33.985409 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:39:48 crc kubenswrapper[4797]: E0216 11:39:48.986000 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:39:59 crc kubenswrapper[4797]: E0216 11:39:59.986211 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:40:12 crc kubenswrapper[4797]: E0216 11:40:12.986036 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:40:23 crc kubenswrapper[4797]: E0216 11:40:23.986447 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 
16 11:40:36 crc kubenswrapper[4797]: E0216 11:40:36.984845 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:40:50 crc kubenswrapper[4797]: E0216 11:40:50.984923 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:41:04 crc kubenswrapper[4797]: E0216 11:41:04.985305 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:41:17 crc kubenswrapper[4797]: E0216 11:41:17.985882 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:41:28 crc kubenswrapper[4797]: I0216 11:41:28.988187 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:41:29 crc kubenswrapper[4797]: E0216 11:41:29.117836 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:41:29 crc kubenswrapper[4797]: E0216 11:41:29.117965 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:41:29 crc kubenswrapper[4797]: E0216 11:41:29.118397 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 11:41:29 crc kubenswrapper[4797]: E0216 11:41:29.119666 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:41:39 crc kubenswrapper[4797]: E0216 11:41:39.984977 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:41:41 crc kubenswrapper[4797]: I0216 11:41:41.703525 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:41:41 crc kubenswrapper[4797]: I0216 11:41:41.703916 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.308890 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hdjb2"] Feb 16 11:41:44 crc kubenswrapper[4797]: E0216 11:41:44.310717 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="extract-utilities" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.310830 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="extract-utilities" Feb 16 11:41:44 crc kubenswrapper[4797]: E0216 11:41:44.310915 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="registry-server" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.310998 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="registry-server" Feb 16 11:41:44 crc kubenswrapper[4797]: E0216 11:41:44.311117 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="extract-content" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.311199 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="extract-content" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.311511 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff30bff-a503-4be0-9f7d-22a650efac7f" containerName="registry-server" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.313773 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.320182 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdjb2"] Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.447637 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-utilities\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.447828 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw8v2\" (UniqueName: \"kubernetes.io/projected/cef3612d-d551-4251-aa3d-706cc53f4f58-kube-api-access-vw8v2\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.448005 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-catalog-content\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.550456 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw8v2\" (UniqueName: \"kubernetes.io/projected/cef3612d-d551-4251-aa3d-706cc53f4f58-kube-api-access-vw8v2\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.550593 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-catalog-content\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.550671 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-utilities\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.551059 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-catalog-content\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.551176 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-utilities\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.570954 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vw8v2\" (UniqueName: \"kubernetes.io/projected/cef3612d-d551-4251-aa3d-706cc53f4f58-kube-api-access-vw8v2\") pod \"redhat-operators-hdjb2\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:44 crc kubenswrapper[4797]: I0216 11:41:44.639270 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:45 crc kubenswrapper[4797]: I0216 11:41:45.083336 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hdjb2"] Feb 16 11:41:45 crc kubenswrapper[4797]: I0216 11:41:45.996818 4797 generic.go:334] "Generic (PLEG): container finished" podID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerID="23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8" exitCode=0 Feb 16 11:41:45 crc kubenswrapper[4797]: I0216 11:41:45.998612 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdjb2" event={"ID":"cef3612d-d551-4251-aa3d-706cc53f4f58","Type":"ContainerDied","Data":"23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8"} Feb 16 11:41:45 crc kubenswrapper[4797]: I0216 11:41:45.998662 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdjb2" event={"ID":"cef3612d-d551-4251-aa3d-706cc53f4f58","Type":"ContainerStarted","Data":"534468036c4bd8df979d2c6ec1c5715fffc4694a83836d68fc0d9f4107db5b15"} Feb 16 11:41:47 crc kubenswrapper[4797]: I0216 11:41:47.008737 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdjb2" event={"ID":"cef3612d-d551-4251-aa3d-706cc53f4f58","Type":"ContainerStarted","Data":"3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5"} Feb 16 11:41:48 crc kubenswrapper[4797]: I0216 11:41:48.023186 4797 generic.go:334] "Generic (PLEG): container finished" podID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerID="3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5" exitCode=0 Feb 16 11:41:48 crc kubenswrapper[4797]: I0216 11:41:48.023380 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdjb2" event={"ID":"cef3612d-d551-4251-aa3d-706cc53f4f58","Type":"ContainerDied","Data":"3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5"} Feb 16 11:41:49 crc kubenswrapper[4797]: I0216 11:41:49.041315 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdjb2" event={"ID":"cef3612d-d551-4251-aa3d-706cc53f4f58","Type":"ContainerStarted","Data":"fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3"} Feb 16 11:41:49 crc kubenswrapper[4797]: I0216 11:41:49.077235 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hdjb2" podStartSLOduration=2.664008018 podStartE2EDuration="5.077208206s" podCreationTimestamp="2026-02-16 11:41:44 +0000 UTC" firstStartedPulling="2026-02-16 11:41:45.999807705 +0000 UTC m=+2100.719992695" lastFinishedPulling="2026-02-16 11:41:48.413007893 +0000 UTC m=+2103.133192883" observedRunningTime="2026-02-16 11:41:49.069113025 +0000 UTC m=+2103.789298005" watchObservedRunningTime="2026-02-16 11:41:49.077208206 +0000 UTC m=+2103.797393226" Feb 16 11:41:51 crc kubenswrapper[4797]: E0216 11:41:51.985154 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:41:54 crc kubenswrapper[4797]: I0216 11:41:54.639848 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:54 crc kubenswrapper[4797]: I0216 11:41:54.640225 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:54 crc kubenswrapper[4797]: I0216 11:41:54.682613 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:55 crc kubenswrapper[4797]: I0216 11:41:55.153780 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:55 crc kubenswrapper[4797]: I0216 11:41:55.220829 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdjb2"] Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.133900 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hdjb2" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="registry-server" containerID="cri-o://fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3" gracePeriod=2 Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.631283 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.758225 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw8v2\" (UniqueName: \"kubernetes.io/projected/cef3612d-d551-4251-aa3d-706cc53f4f58-kube-api-access-vw8v2\") pod \"cef3612d-d551-4251-aa3d-706cc53f4f58\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.758419 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-catalog-content\") pod \"cef3612d-d551-4251-aa3d-706cc53f4f58\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.758551 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-utilities\") pod \"cef3612d-d551-4251-aa3d-706cc53f4f58\" (UID: \"cef3612d-d551-4251-aa3d-706cc53f4f58\") " Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.760027 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-utilities" (OuterVolumeSpecName: "utilities") pod "cef3612d-d551-4251-aa3d-706cc53f4f58" (UID: "cef3612d-d551-4251-aa3d-706cc53f4f58"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.764189 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef3612d-d551-4251-aa3d-706cc53f4f58-kube-api-access-vw8v2" (OuterVolumeSpecName: "kube-api-access-vw8v2") pod "cef3612d-d551-4251-aa3d-706cc53f4f58" (UID: "cef3612d-d551-4251-aa3d-706cc53f4f58"). InnerVolumeSpecName "kube-api-access-vw8v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.860770 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw8v2\" (UniqueName: \"kubernetes.io/projected/cef3612d-d551-4251-aa3d-706cc53f4f58-kube-api-access-vw8v2\") on node \"crc\" DevicePath \"\"" Feb 16 11:41:57 crc kubenswrapper[4797]: I0216 11:41:57.860808 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.144806 4797 generic.go:334] "Generic (PLEG): container finished" podID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerID="fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3" exitCode=0 Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.144841 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdjb2" event={"ID":"cef3612d-d551-4251-aa3d-706cc53f4f58","Type":"ContainerDied","Data":"fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3"} Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.144869 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hdjb2" event={"ID":"cef3612d-d551-4251-aa3d-706cc53f4f58","Type":"ContainerDied","Data":"534468036c4bd8df979d2c6ec1c5715fffc4694a83836d68fc0d9f4107db5b15"} Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.144886 4797 scope.go:117] "RemoveContainer" containerID="fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.144897 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hdjb2" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.166108 4797 scope.go:117] "RemoveContainer" containerID="3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.184953 4797 scope.go:117] "RemoveContainer" containerID="23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.246633 4797 scope.go:117] "RemoveContainer" containerID="fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3" Feb 16 11:41:58 crc kubenswrapper[4797]: E0216 11:41:58.247143 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3\": container with ID starting with fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3 not found: ID does not exist" containerID="fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.247174 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3"} err="failed to get container status \"fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3\": rpc error: code = NotFound desc = could not find container \"fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3\": container with ID starting with fdf14d7f629bd6a35ffb640a610229565fb00c4927b29631bca9e462924160d3 not found: ID does not exist" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.247196 4797 scope.go:117] "RemoveContainer" containerID="3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5" Feb 16 11:41:58 crc kubenswrapper[4797]: E0216 11:41:58.247572 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5\": container with ID starting with 3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5 not found: ID does not exist" containerID="3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.247608 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5"} err="failed to get container status \"3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5\": rpc error: code = NotFound desc = could not find container \"3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5\": container with ID starting with 3d2df5762e8706395799db7433e727de14d13375f452fec751b001e6686834b5 not found: ID does not exist" Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.247620 4797 scope.go:117] "RemoveContainer" containerID="23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8" Feb 16 11:41:58 crc kubenswrapper[4797]: E0216 11:41:58.247873 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8\": container with ID starting with 23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8 not found: ID does not exist" containerID="23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8" 
Feb 16 11:41:58 crc kubenswrapper[4797]: I0216 11:41:58.247904 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8"} err="failed to get container status \"23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8\": rpc error: code = NotFound desc = could not find container \"23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8\": container with ID starting with 23c55d608771c33b0d5562229ffd97bb14cf924e921fb492a831149c7f6db3d8 not found: ID does not exist" Feb 16 11:42:00 crc kubenswrapper[4797]: I0216 11:41:59.999737 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cef3612d-d551-4251-aa3d-706cc53f4f58" (UID: "cef3612d-d551-4251-aa3d-706cc53f4f58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:42:00 crc kubenswrapper[4797]: I0216 11:42:00.004837 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef3612d-d551-4251-aa3d-706cc53f4f58-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:42:00 crc kubenswrapper[4797]: I0216 11:42:00.288160 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hdjb2"] Feb 16 11:42:00 crc kubenswrapper[4797]: I0216 11:42:00.298237 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hdjb2"] Feb 16 11:42:01 crc kubenswrapper[4797]: I0216 11:42:01.996647 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" path="/var/lib/kubelet/pods/cef3612d-d551-4251-aa3d-706cc53f4f58/volumes" Feb 16 11:42:05 crc kubenswrapper[4797]: E0216 11:42:05.994212 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:42:11 crc kubenswrapper[4797]: I0216 11:42:11.704085 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:42:11 crc kubenswrapper[4797]: I0216 11:42:11.704745 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:42:20 crc kubenswrapper[4797]: E0216 11:42:20.986158 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.159892 4797 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/certified-operators-2djdc"]
Feb 16 11:42:30 crc kubenswrapper[4797]: E0216 11:42:30.160745 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="registry-server"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.160757 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="registry-server"
Feb 16 11:42:30 crc kubenswrapper[4797]: E0216 11:42:30.160776 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="extract-utilities"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.160784 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="extract-utilities"
Feb 16 11:42:30 crc kubenswrapper[4797]: E0216 11:42:30.160796 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="extract-content"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.160802 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="extract-content"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.160999 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef3612d-d551-4251-aa3d-706cc53f4f58" containerName="registry-server"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.162420 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.175108 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2djdc"]
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.250320 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzbbk\" (UniqueName: \"kubernetes.io/projected/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-kube-api-access-nzbbk\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.250378 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-catalog-content\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.250778 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-utilities\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.352210 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-utilities\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.352341 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzbbk\" (UniqueName: \"kubernetes.io/projected/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-kube-api-access-nzbbk\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.352363 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-catalog-content\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.352725 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-utilities\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.352735 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-catalog-content\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.377846 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzbbk\" (UniqueName: \"kubernetes.io/projected/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-kube-api-access-nzbbk\") pod \"certified-operators-2djdc\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") " pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.479127 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:30 crc kubenswrapper[4797]: I0216 11:42:30.985230 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2djdc"]
Feb 16 11:42:31 crc kubenswrapper[4797]: I0216 11:42:31.487480 4797 generic.go:334] "Generic (PLEG): container finished" podID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerID="8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5" exitCode=0
Feb 16 11:42:31 crc kubenswrapper[4797]: I0216 11:42:31.487611 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2djdc" event={"ID":"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801","Type":"ContainerDied","Data":"8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5"}
Feb 16 11:42:31 crc kubenswrapper[4797]: I0216 11:42:31.487776 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2djdc" event={"ID":"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801","Type":"ContainerStarted","Data":"05817b4821e4615949ee3f9afb294a78bcabb78a0a5f12acb02273925a97e193"}
Feb 16 11:42:32 crc kubenswrapper[4797]: E0216 11:42:32.985085 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:42:34 crc kubenswrapper[4797]: I0216 11:42:34.526385 4797 generic.go:334] "Generic (PLEG): container finished" podID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerID="ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6" exitCode=0
Feb 16 11:42:34 crc kubenswrapper[4797]: I0216 11:42:34.526512 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2djdc" event={"ID":"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801","Type":"ContainerDied","Data":"ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6"}
Feb 16 11:42:35 crc kubenswrapper[4797]: I0216 11:42:35.537125 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2djdc" event={"ID":"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801","Type":"ContainerStarted","Data":"d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96"}
Feb 16 11:42:35 crc kubenswrapper[4797]: I0216 11:42:35.568211 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2djdc" podStartSLOduration=2.151290841 podStartE2EDuration="5.568195203s" podCreationTimestamp="2026-02-16 11:42:30 +0000 UTC" firstStartedPulling="2026-02-16 11:42:31.48900455 +0000 UTC m=+2146.209189530" lastFinishedPulling="2026-02-16 11:42:34.905908912 +0000 UTC m=+2149.626093892" observedRunningTime="2026-02-16 11:42:35.56583706 +0000 UTC m=+2150.286022040" watchObservedRunningTime="2026-02-16 11:42:35.568195203 +0000 UTC m=+2150.288380183"
Feb 16 11:42:40 crc kubenswrapper[4797]: I0216 11:42:40.479975 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:40 crc kubenswrapper[4797]: I0216 11:42:40.480502 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:40 crc kubenswrapper[4797]: I0216 11:42:40.534302 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:40 crc kubenswrapper[4797]: I0216 11:42:40.629182 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:40 crc kubenswrapper[4797]: I0216 11:42:40.787839 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2djdc"]
Feb 16 11:42:41 crc kubenswrapper[4797]: I0216 11:42:41.703827 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:42:41 crc kubenswrapper[4797]: I0216 11:42:41.703951 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:42:41 crc kubenswrapper[4797]: I0216 11:42:41.704035 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl"
Feb 16 11:42:41 crc kubenswrapper[4797]: I0216 11:42:41.705433 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"436dfb51bb84994be6a8f00425e5ad1ba117a367b8df143eb4e404d177a03be0"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 11:42:41 crc kubenswrapper[4797]: I0216 11:42:41.705655 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://436dfb51bb84994be6a8f00425e5ad1ba117a367b8df143eb4e404d177a03be0" gracePeriod=600
Feb 16 11:42:42 crc kubenswrapper[4797]: I0216 11:42:42.604820 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="436dfb51bb84994be6a8f00425e5ad1ba117a367b8df143eb4e404d177a03be0" exitCode=0
Feb 16 11:42:42 crc kubenswrapper[4797]: I0216 11:42:42.604911 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"436dfb51bb84994be6a8f00425e5ad1ba117a367b8df143eb4e404d177a03be0"}
Feb 16 11:42:42 crc kubenswrapper[4797]: I0216 11:42:42.605449 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8"}
Feb 16 11:42:42 crc kubenswrapper[4797]: I0216 11:42:42.605473 4797 scope.go:117] "RemoveContainer" containerID="aca29b183163a44c719ac643b2abd78c800ed3bfb825f84137bd52bc212bbca5"
Feb 16 11:42:42 crc kubenswrapper[4797]: I0216 11:42:42.605553 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2djdc" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="registry-server" containerID="cri-o://d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96" gracePeriod=2
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.182676 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.258460 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzbbk\" (UniqueName: \"kubernetes.io/projected/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-kube-api-access-nzbbk\") pod \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") "
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.258500 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-utilities\") pod \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") "
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.258612 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-catalog-content\") pod \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\" (UID: \"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801\") "
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.260022 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-utilities" (OuterVolumeSpecName: "utilities") pod "9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" (UID: "9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.263701 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-kube-api-access-nzbbk" (OuterVolumeSpecName: "kube-api-access-nzbbk") pod "9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" (UID: "9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801"). InnerVolumeSpecName "kube-api-access-nzbbk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.314965 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" (UID: "9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.360728 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzbbk\" (UniqueName: \"kubernetes.io/projected/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-kube-api-access-nzbbk\") on node \"crc\" DevicePath \"\""
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.361074 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.361086 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.620227 4797 generic.go:334] "Generic (PLEG): container finished" podID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerID="d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96" exitCode=0
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.620306 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2djdc" event={"ID":"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801","Type":"ContainerDied","Data":"d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96"}
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.620369 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2djdc" event={"ID":"9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801","Type":"ContainerDied","Data":"05817b4821e4615949ee3f9afb294a78bcabb78a0a5f12acb02273925a97e193"}
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.620371 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2djdc"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.620393 4797 scope.go:117] "RemoveContainer" containerID="d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.650290 4797 scope.go:117] "RemoveContainer" containerID="ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.688292 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2djdc"]
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.697439 4797 scope.go:117] "RemoveContainer" containerID="8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.699563 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2djdc"]
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.730385 4797 scope.go:117] "RemoveContainer" containerID="d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96"
Feb 16 11:42:43 crc kubenswrapper[4797]: E0216 11:42:43.731051 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96\": container with ID starting with d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96 not found: ID does not exist" containerID="d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.731120 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96"} err="failed to get container status \"d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96\": rpc error: code = NotFound desc = could not find container \"d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96\": container with ID starting with d898d1a17f276d200327ecee3b51ef42e46d6fa6dcc8e9d1f6e9ea60a1da6e96 not found: ID does not exist"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.731157 4797 scope.go:117] "RemoveContainer" containerID="ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6"
Feb 16 11:42:43 crc kubenswrapper[4797]: E0216 11:42:43.731744 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6\": container with ID starting with ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6 not found: ID does not exist" containerID="ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.731817 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6"} err="failed to get container status \"ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6\": rpc error: code = NotFound desc = could not find container \"ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6\": container with ID starting with ee94ef95a86bd9bb9f0e0bc18d80759e60dc6006435e8c7eab2b3089dc1454a6 not found: ID does not exist"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.731872 4797 scope.go:117] "RemoveContainer" containerID="8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5"
Feb 16 11:42:43 crc kubenswrapper[4797]: E0216 11:42:43.732286 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5\": container with ID starting with 8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5 not found: ID does not exist" containerID="8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.732324 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5"} err="failed to get container status \"8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5\": rpc error: code = NotFound desc = could not find container \"8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5\": container with ID starting with 8169c3a04b7bb7ef6c492bd876f7166ccc282fa25528b05e9c1b1cf01a576ec5 not found: ID does not exist"
Feb 16 11:42:43 crc kubenswrapper[4797]: I0216 11:42:43.995436 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" path="/var/lib/kubelet/pods/9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801/volumes"
Feb 16 11:42:46 crc kubenswrapper[4797]: E0216 11:42:46.985303 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:42:59 crc kubenswrapper[4797]: E0216 11:42:59.989221 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:43:11 crc kubenswrapper[4797]: E0216 11:43:11.984677 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:43:23 crc kubenswrapper[4797]: E0216 11:43:23.986655 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:43:38 crc kubenswrapper[4797]: E0216 11:43:38.984081 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:43:49 crc kubenswrapper[4797]: E0216 11:43:49.985322 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:44:03 crc kubenswrapper[4797]: E0216 11:44:03.990166 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:44:16 crc kubenswrapper[4797]: E0216 11:44:16.000437 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.570212 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dzlgp"]
Feb 16 11:44:25 crc kubenswrapper[4797]: E0216 11:44:25.571268 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="registry-server"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.571290 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="registry-server"
Feb 16 11:44:25 crc kubenswrapper[4797]: E0216 11:44:25.571311 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="extract-utilities"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.571320 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="extract-utilities"
Feb 16 11:44:25 crc kubenswrapper[4797]: E0216 11:44:25.571354 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="extract-content"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.571363 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="extract-content"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.571651 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9ff3d1-e61b-4d4c-9436-cac0b4cc9801" containerName="registry-server"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.573505 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.587530 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dzlgp"]
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.625742 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-utilities\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.625866 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-catalog-content\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.625933 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pscrx\" (UniqueName: \"kubernetes.io/projected/e625b772-ed05-45a7-915a-94bc5a59a6e5-kube-api-access-pscrx\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.728139 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-catalog-content\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.728470 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pscrx\" (UniqueName: \"kubernetes.io/projected/e625b772-ed05-45a7-915a-94bc5a59a6e5-kube-api-access-pscrx\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.728662 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-utilities\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.728678 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-catalog-content\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.729289 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-utilities\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.774178 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pscrx\" (UniqueName: \"kubernetes.io/projected/e625b772-ed05-45a7-915a-94bc5a59a6e5-kube-api-access-pscrx\") pod \"redhat-marketplace-dzlgp\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") " pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:25 crc kubenswrapper[4797]: I0216 11:44:25.897526 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:26 crc kubenswrapper[4797]: I0216 11:44:26.392849 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dzlgp"]
Feb 16 11:44:26 crc kubenswrapper[4797]: W0216 11:44:26.395771 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode625b772_ed05_45a7_915a_94bc5a59a6e5.slice/crio-42d0f96df51b38327227f64033b55dffa6b26af89ba05c0b0c33a5fd728071b9 WatchSource:0}: Error finding container 42d0f96df51b38327227f64033b55dffa6b26af89ba05c0b0c33a5fd728071b9: Status 404 returned error can't find the container with id 42d0f96df51b38327227f64033b55dffa6b26af89ba05c0b0c33a5fd728071b9
Feb 16 11:44:26 crc kubenswrapper[4797]: I0216 11:44:26.698959 4797 generic.go:334] "Generic (PLEG): container finished" podID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerID="73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19" exitCode=0
Feb 16 11:44:26 crc kubenswrapper[4797]: I0216 11:44:26.699006 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dzlgp" event={"ID":"e625b772-ed05-45a7-915a-94bc5a59a6e5","Type":"ContainerDied","Data":"73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19"}
Feb 16 11:44:26 crc kubenswrapper[4797]: I0216 11:44:26.699033 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dzlgp" event={"ID":"e625b772-ed05-45a7-915a-94bc5a59a6e5","Type":"ContainerStarted","Data":"42d0f96df51b38327227f64033b55dffa6b26af89ba05c0b0c33a5fd728071b9"}
Feb 16 11:44:27 crc kubenswrapper[4797]: I0216 11:44:27.710919 4797 generic.go:334] "Generic (PLEG): container finished" podID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerID="e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6" exitCode=0
Feb 16 11:44:27 crc kubenswrapper[4797]: I0216 11:44:27.710986 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dzlgp" event={"ID":"e625b772-ed05-45a7-915a-94bc5a59a6e5","Type":"ContainerDied","Data":"e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6"}
Feb 16 11:44:27 crc kubenswrapper[4797]: E0216 11:44:27.985912 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:44:28 crc kubenswrapper[4797]: I0216 11:44:28.722556 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dzlgp" event={"ID":"e625b772-ed05-45a7-915a-94bc5a59a6e5","Type":"ContainerStarted","Data":"36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0"}
Feb 16 11:44:28 crc kubenswrapper[4797]: I0216 11:44:28.764704 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dzlgp" podStartSLOduration=2.34365954 podStartE2EDuration="3.764681099s" podCreationTimestamp="2026-02-16 11:44:25 +0000 UTC" firstStartedPulling="2026-02-16 11:44:26.700629164 +0000 UTC m=+2261.420814144" lastFinishedPulling="2026-02-16 11:44:28.121650723 +0000 UTC m=+2262.841835703" observedRunningTime="2026-02-16 11:44:28.756560037 +0000 UTC m=+2263.476745027" watchObservedRunningTime="2026-02-16 11:44:28.764681099 +0000 UTC m=+2263.484866079"
Feb 16 11:44:35 crc kubenswrapper[4797]: I0216 11:44:35.898666 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:35 crc kubenswrapper[4797]: I0216 11:44:35.899400 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:35 crc kubenswrapper[4797]: I0216 11:44:35.941500 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:36 crc kubenswrapper[4797]: I0216 11:44:36.864273 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:36 crc kubenswrapper[4797]: I0216 11:44:36.913459 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dzlgp"]
Feb 16 11:44:38 crc kubenswrapper[4797]: I0216 11:44:38.831424 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dzlgp" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="registry-server" containerID="cri-o://36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0" gracePeriod=2
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.327176 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.434565 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-catalog-content\") pod \"e625b772-ed05-45a7-915a-94bc5a59a6e5\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") "
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.434915 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pscrx\" (UniqueName: \"kubernetes.io/projected/e625b772-ed05-45a7-915a-94bc5a59a6e5-kube-api-access-pscrx\") pod \"e625b772-ed05-45a7-915a-94bc5a59a6e5\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") "
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.435070 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-utilities\") pod \"e625b772-ed05-45a7-915a-94bc5a59a6e5\" (UID: \"e625b772-ed05-45a7-915a-94bc5a59a6e5\") "
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.435948 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-utilities" (OuterVolumeSpecName: "utilities") pod "e625b772-ed05-45a7-915a-94bc5a59a6e5" (UID: "e625b772-ed05-45a7-915a-94bc5a59a6e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.440544 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e625b772-ed05-45a7-915a-94bc5a59a6e5-kube-api-access-pscrx" (OuterVolumeSpecName: "kube-api-access-pscrx") pod "e625b772-ed05-45a7-915a-94bc5a59a6e5" (UID: "e625b772-ed05-45a7-915a-94bc5a59a6e5"). InnerVolumeSpecName "kube-api-access-pscrx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.462201 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e625b772-ed05-45a7-915a-94bc5a59a6e5" (UID: "e625b772-ed05-45a7-915a-94bc5a59a6e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.537319 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.537742 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pscrx\" (UniqueName: \"kubernetes.io/projected/e625b772-ed05-45a7-915a-94bc5a59a6e5-kube-api-access-pscrx\") on node \"crc\" DevicePath \"\""
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.537804 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e625b772-ed05-45a7-915a-94bc5a59a6e5-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.840486 4797 generic.go:334] "Generic (PLEG): container finished" podID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerID="36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0" exitCode=0
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.840529 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dzlgp" event={"ID":"e625b772-ed05-45a7-915a-94bc5a59a6e5","Type":"ContainerDied","Data":"36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0"}
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.840555 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dzlgp" event={"ID":"e625b772-ed05-45a7-915a-94bc5a59a6e5","Type":"ContainerDied","Data":"42d0f96df51b38327227f64033b55dffa6b26af89ba05c0b0c33a5fd728071b9"}
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.840571 4797 scope.go:117] "RemoveContainer" containerID="36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.841815 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dzlgp"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.860890 4797 scope.go:117] "RemoveContainer" containerID="e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.878974 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dzlgp"]
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.888632 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dzlgp"]
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.898339 4797 scope.go:117] "RemoveContainer" containerID="73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.951632 4797 scope.go:117] "RemoveContainer" containerID="36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0"
Feb 16 11:44:39 crc kubenswrapper[4797]: E0216 11:44:39.952030 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0\": container with ID starting with 36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0 not found: ID does not exist" containerID="36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.952068 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0"} err="failed to get container status \"36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0\": rpc error: code = NotFound desc = could not find container \"36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0\": container with ID starting with 36735ecd86b63543845a669787947892edbc81cb059880a84f0a252815a2a4c0 not found: ID does not exist"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.952090 4797 scope.go:117] "RemoveContainer" containerID="e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6"
Feb 16 11:44:39 crc kubenswrapper[4797]: E0216 11:44:39.952367 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6\": container with ID starting with e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6 not found: ID does not exist" containerID="e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.952417 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6"} err="failed to get container status \"e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6\": rpc error: code = NotFound desc = could not find container \"e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6\": container with ID starting with e0b6b6254aa3793d6564c225d84682490f1f0546a651022c49c6a7c46a0d1fa6 not found: ID does not exist"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.952448 4797 scope.go:117] "RemoveContainer" containerID="73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19"
Feb 16 11:44:39 crc kubenswrapper[4797]: E0216 11:44:39.952747 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19\": container with ID starting with 73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19 not found: ID does not exist" containerID="73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.952775 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19"} err="failed to get container status \"73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19\": rpc error: code = NotFound desc = could not find container \"73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19\": container with ID starting with 73535d0b87b65657996ae40d6d6b624c39647201774b8c2ffe838771cab21c19 not found: ID does not exist"
Feb 16 11:44:39 crc kubenswrapper[4797]: I0216 11:44:39.994379 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" path="/var/lib/kubelet/pods/e625b772-ed05-45a7-915a-94bc5a59a6e5/volumes"
Feb 16 11:44:40 crc kubenswrapper[4797]: E0216 11:44:40.984903 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:44:54 crc kubenswrapper[4797]: E0216 11:44:54.985080 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.165986 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"]
Feb 16 11:45:00 crc kubenswrapper[4797]: E0216 11:45:00.167097 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="extract-content"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.167113 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="extract-content"
Feb 16 11:45:00 crc kubenswrapper[4797]: E0216 11:45:00.167186 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="registry-server"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.167197 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="registry-server"
Feb 16 11:45:00 crc kubenswrapper[4797]: E0216 11:45:00.167222 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="extract-utilities"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.167230 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="extract-utilities"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.167474 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="e625b772-ed05-45a7-915a-94bc5a59a6e5" containerName="registry-server"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.168416 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.172064 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.173137 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.174082 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c417a48-2fbe-4022-bf43-7a2b54936dd4-config-volume\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.174373 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c417a48-2fbe-4022-bf43-7a2b54936dd4-secret-volume\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.174551 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb9gp\" (UniqueName: \"kubernetes.io/projected/5c417a48-2fbe-4022-bf43-7a2b54936dd4-kube-api-access-gb9gp\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.176366 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"]
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.276268 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c417a48-2fbe-4022-bf43-7a2b54936dd4-config-volume\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.276368 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c417a48-2fbe-4022-bf43-7a2b54936dd4-secret-volume\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.276411 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb9gp\" (UniqueName: \"kubernetes.io/projected/5c417a48-2fbe-4022-bf43-7a2b54936dd4-kube-api-access-gb9gp\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.277806 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c417a48-2fbe-4022-bf43-7a2b54936dd4-config-volume\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.282652 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c417a48-2fbe-4022-bf43-7a2b54936dd4-secret-volume\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.294747 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb9gp\" (UniqueName: \"kubernetes.io/projected/5c417a48-2fbe-4022-bf43-7a2b54936dd4-kube-api-access-gb9gp\") pod \"collect-profiles-29520705-9whf5\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:00 crc kubenswrapper[4797]: I0216 11:45:00.495994 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:01 crc kubenswrapper[4797]: I0216 11:45:01.017206 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"]
Feb 16 11:45:01 crc kubenswrapper[4797]: I0216 11:45:01.037972 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5" event={"ID":"5c417a48-2fbe-4022-bf43-7a2b54936dd4","Type":"ContainerStarted","Data":"e42d303672c45f1f0078052ca66a9a07b74fe49f1a6fcdc0c2b21e5418df61d6"}
Feb 16 11:45:02 crc kubenswrapper[4797]: I0216 11:45:02.048563 4797 generic.go:334] "Generic (PLEG): container finished" podID="5c417a48-2fbe-4022-bf43-7a2b54936dd4" containerID="1c87622f44505635f60fa5c1b4332f39da051ab2894cb7d536a77a59bd6e139c" exitCode=0
Feb 16 11:45:02 crc kubenswrapper[4797]: I0216 11:45:02.048812 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5" event={"ID":"5c417a48-2fbe-4022-bf43-7a2b54936dd4","Type":"ContainerDied","Data":"1c87622f44505635f60fa5c1b4332f39da051ab2894cb7d536a77a59bd6e139c"}
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.486931 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.541792 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c417a48-2fbe-4022-bf43-7a2b54936dd4-secret-volume\") pod \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") "
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.541994 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gb9gp\" (UniqueName: \"kubernetes.io/projected/5c417a48-2fbe-4022-bf43-7a2b54936dd4-kube-api-access-gb9gp\") pod \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") "
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.542048 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c417a48-2fbe-4022-bf43-7a2b54936dd4-config-volume\") pod \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\" (UID: \"5c417a48-2fbe-4022-bf43-7a2b54936dd4\") "
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.542848 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c417a48-2fbe-4022-bf43-7a2b54936dd4-config-volume" (OuterVolumeSpecName: "config-volume") pod "5c417a48-2fbe-4022-bf43-7a2b54936dd4" (UID: "5c417a48-2fbe-4022-bf43-7a2b54936dd4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.543061 4797 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c417a48-2fbe-4022-bf43-7a2b54936dd4-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.549742 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c417a48-2fbe-4022-bf43-7a2b54936dd4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5c417a48-2fbe-4022-bf43-7a2b54936dd4" (UID: "5c417a48-2fbe-4022-bf43-7a2b54936dd4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.550250 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c417a48-2fbe-4022-bf43-7a2b54936dd4-kube-api-access-gb9gp" (OuterVolumeSpecName: "kube-api-access-gb9gp") pod "5c417a48-2fbe-4022-bf43-7a2b54936dd4" (UID: "5c417a48-2fbe-4022-bf43-7a2b54936dd4"). InnerVolumeSpecName "kube-api-access-gb9gp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.644694 4797 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5c417a48-2fbe-4022-bf43-7a2b54936dd4-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 11:45:03 crc kubenswrapper[4797]: I0216 11:45:03.644739 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gb9gp\" (UniqueName: \"kubernetes.io/projected/5c417a48-2fbe-4022-bf43-7a2b54936dd4-kube-api-access-gb9gp\") on node \"crc\" DevicePath \"\""
Feb 16 11:45:04 crc kubenswrapper[4797]: I0216 11:45:04.085044 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5" event={"ID":"5c417a48-2fbe-4022-bf43-7a2b54936dd4","Type":"ContainerDied","Data":"e42d303672c45f1f0078052ca66a9a07b74fe49f1a6fcdc0c2b21e5418df61d6"}
Feb 16 11:45:04 crc kubenswrapper[4797]: I0216 11:45:04.085420 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e42d303672c45f1f0078052ca66a9a07b74fe49f1a6fcdc0c2b21e5418df61d6"
Feb 16 11:45:04 crc kubenswrapper[4797]: I0216 11:45:04.085137 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520705-9whf5"
Feb 16 11:45:04 crc kubenswrapper[4797]: I0216 11:45:04.570474 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"]
Feb 16 11:45:04 crc kubenswrapper[4797]: I0216 11:45:04.578432 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520660-pm2zr"]
Feb 16 11:45:06 crc kubenswrapper[4797]: I0216 11:45:05.999830 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f673c7b-0916-4829-9630-1f927c932254" path="/var/lib/kubelet/pods/7f673c7b-0916-4829-9630-1f927c932254/volumes"
Feb 16 11:45:07 crc kubenswrapper[4797]: E0216 11:45:07.985422 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:45:11 crc kubenswrapper[4797]: I0216 11:45:11.703507 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:45:11 crc kubenswrapper[4797]: I0216 11:45:11.704343 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:45:19 crc kubenswrapper[4797]: E0216 11:45:19.985013 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:45:33 crc kubenswrapper[4797]: E0216 11:45:33.987235 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:45:41 crc kubenswrapper[4797]: I0216 11:45:41.703090 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:45:41 crc kubenswrapper[4797]: I0216 11:45:41.703636 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:45:46 crc kubenswrapper[4797]: E0216 11:45:46.986193 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:45:59 crc kubenswrapper[4797]: E0216 11:45:59.984331 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:46:06 crc kubenswrapper[4797]: I0216 11:46:06.003087 4797 scope.go:117] "RemoveContainer" containerID="b939fafe32187235946cb441cc4979e645f2bba16ba046dd48c4fe719806a1d3"
Feb 16 11:46:11 crc kubenswrapper[4797]: I0216 11:46:11.704026 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:46:11 crc kubenswrapper[4797]: I0216 11:46:11.704767 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:46:11 crc kubenswrapper[4797]: I0216 11:46:11.704853 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl"
Feb 16 11:46:11 crc kubenswrapper[4797]: I0216 11:46:11.705873 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 11:46:11 crc kubenswrapper[4797]: I0216 11:46:11.706013 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" gracePeriod=600
Feb 16 11:46:11 crc kubenswrapper[4797]: E0216 11:46:11.832909 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21"
Feb 16 11:46:13 crc kubenswrapper[4797]: I0216 11:46:13.148587 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" exitCode=0
Feb 16 11:46:13 crc kubenswrapper[4797]: I0216 11:46:13.148667 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8"}
Feb 16 11:46:13 crc kubenswrapper[4797]: I0216 11:46:13.150419 4797 scope.go:117] "RemoveContainer" containerID="436dfb51bb84994be6a8f00425e5ad1ba117a367b8df143eb4e404d177a03be0"
Feb 16 11:46:13 crc kubenswrapper[4797]: I0216 11:46:13.151370 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8"
Feb 16 11:46:13 crc kubenswrapper[4797]: E0216 11:46:13.151860 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21"
Feb 16 11:46:13 crc kubenswrapper[4797]: E0216 11:46:13.995933 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:46:23 crc kubenswrapper[4797]: I0216 11:46:23.982575 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8"
Feb 16 11:46:23 crc kubenswrapper[4797]: E0216 11:46:23.983327 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21"
Feb 16 11:46:26 crc kubenswrapper[4797]: E0216 11:46:26.984902 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed
to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:46:34 crc kubenswrapper[4797]: I0216 11:46:34.983629 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:46:34 crc kubenswrapper[4797]: E0216 11:46:34.984844 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:46:40 crc kubenswrapper[4797]: I0216 11:46:40.986361 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:46:41 crc kubenswrapper[4797]: E0216 11:46:41.077329 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:46:41 crc kubenswrapper[4797]: E0216 11:46:41.077395 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:46:41 crc kubenswrapper[4797]: E0216 11:46:41.077676 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 11:46:41 crc kubenswrapper[4797]: E0216 11:46:41.079474 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:46:46 crc kubenswrapper[4797]: I0216 11:46:46.984154 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:46:46 crc kubenswrapper[4797]: E0216 11:46:46.984911 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:46:51 crc kubenswrapper[4797]: E0216 11:46:51.985881 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:47:00 crc kubenswrapper[4797]: I0216 11:47:00.983402 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:47:00 crc kubenswrapper[4797]: E0216 11:47:00.984414 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:47:02 crc kubenswrapper[4797]: E0216 11:47:02.986877 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:47:14 crc kubenswrapper[4797]: I0216 11:47:14.984146 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:47:14 crc kubenswrapper[4797]: E0216 11:47:14.985210 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:47:14 crc kubenswrapper[4797]: E0216 11:47:14.986681 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:47:25 crc kubenswrapper[4797]: E0216 11:47:25.998844 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:47:28 crc kubenswrapper[4797]: I0216 11:47:28.982761 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:47:28 crc kubenswrapper[4797]: E0216 11:47:28.983134 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:47:38 crc kubenswrapper[4797]: E0216 11:47:38.986784 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:47:40 crc kubenswrapper[4797]: I0216 11:47:40.983547 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:47:40 crc kubenswrapper[4797]: E0216 11:47:40.985276 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:47:51 crc kubenswrapper[4797]: E0216 11:47:51.985726 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:47:54 crc kubenswrapper[4797]: I0216 11:47:54.982671 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:47:54 crc kubenswrapper[4797]: E0216 11:47:54.983084 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:48:06 crc kubenswrapper[4797]: E0216 11:48:06.988230 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:48:09 crc kubenswrapper[4797]: I0216 11:48:09.982892 4797 scope.go:117] "RemoveContainer" 
containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:48:09 crc kubenswrapper[4797]: E0216 11:48:09.983403 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:48:18 crc kubenswrapper[4797]: E0216 11:48:18.984787 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:48:20 crc kubenswrapper[4797]: I0216 11:48:20.983231 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:48:20 crc kubenswrapper[4797]: E0216 11:48:20.983840 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:48:30 crc kubenswrapper[4797]: E0216 11:48:30.984409 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:48:33 crc kubenswrapper[4797]: I0216 11:48:33.983111 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:48:33 crc kubenswrapper[4797]: E0216 11:48:33.983737 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:48:45 crc kubenswrapper[4797]: E0216 11:48:45.991358 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:48:48 crc kubenswrapper[4797]: I0216 11:48:48.982912 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:48:48 crc kubenswrapper[4797]: E0216 11:48:48.983376 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:48:59 crc kubenswrapper[4797]: I0216 11:48:59.982976 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:48:59 crc kubenswrapper[4797]: E0216 11:48:59.983767 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:48:59 crc kubenswrapper[4797]: E0216 11:48:59.984275 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.307963 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kxcwd"] Feb 16 11:49:02 crc kubenswrapper[4797]: E0216 11:49:02.308833 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c417a48-2fbe-4022-bf43-7a2b54936dd4" containerName="collect-profiles" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.308877 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c417a48-2fbe-4022-bf43-7a2b54936dd4" containerName="collect-profiles" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.309092 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c417a48-2fbe-4022-bf43-7a2b54936dd4" containerName="collect-profiles" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.310491 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.344703 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxcwd"] Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.371260 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2fl9\" (UniqueName: \"kubernetes.io/projected/ca08b349-192d-45b3-a2a2-b5f18bc705c3-kube-api-access-w2fl9\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.371626 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-catalog-content\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.371723 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-utilities\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.474334 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-catalog-content\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.474405 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-utilities\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.474625 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2fl9\" (UniqueName: \"kubernetes.io/projected/ca08b349-192d-45b3-a2a2-b5f18bc705c3-kube-api-access-w2fl9\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.475004 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-catalog-content\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.475040 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-utilities\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.493445 4797 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w2fl9\" (UniqueName: \"kubernetes.io/projected/ca08b349-192d-45b3-a2a2-b5f18bc705c3-kube-api-access-w2fl9\") pod \"community-operators-kxcwd\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:02 crc kubenswrapper[4797]: I0216 11:49:02.644691 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:03 crc kubenswrapper[4797]: I0216 11:49:03.286800 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kxcwd"] Feb 16 11:49:03 crc kubenswrapper[4797]: I0216 11:49:03.908708 4797 generic.go:334] "Generic (PLEG): container finished" podID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerID="69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc" exitCode=0 Feb 16 11:49:03 crc kubenswrapper[4797]: I0216 11:49:03.908775 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxcwd" event={"ID":"ca08b349-192d-45b3-a2a2-b5f18bc705c3","Type":"ContainerDied","Data":"69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc"} Feb 16 11:49:03 crc kubenswrapper[4797]: I0216 11:49:03.909042 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxcwd" event={"ID":"ca08b349-192d-45b3-a2a2-b5f18bc705c3","Type":"ContainerStarted","Data":"f1cca3ca378cd181f4dd1d9f4e0ee983d1e4c0ad6a4fadb7d1b7b7c262170d33"} Feb 16 11:49:04 crc kubenswrapper[4797]: I0216 11:49:04.920896 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxcwd" event={"ID":"ca08b349-192d-45b3-a2a2-b5f18bc705c3","Type":"ContainerStarted","Data":"3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613"} Feb 16 11:49:05 crc kubenswrapper[4797]: I0216 11:49:05.930320 4797 generic.go:334] "Generic (PLEG): container finished" podID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerID="3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613" exitCode=0 Feb 16 11:49:05 crc kubenswrapper[4797]: I0216 11:49:05.930395 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxcwd" event={"ID":"ca08b349-192d-45b3-a2a2-b5f18bc705c3","Type":"ContainerDied","Data":"3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613"} Feb 16 11:49:06 crc kubenswrapper[4797]: I0216 11:49:06.940836 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxcwd" event={"ID":"ca08b349-192d-45b3-a2a2-b5f18bc705c3","Type":"ContainerStarted","Data":"07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af"} Feb 16 11:49:06 crc kubenswrapper[4797]: I0216 11:49:06.962367 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kxcwd" podStartSLOduration=2.258520337 podStartE2EDuration="4.962341768s" podCreationTimestamp="2026-02-16 11:49:02 +0000 UTC" firstStartedPulling="2026-02-16 11:49:03.91029882 +0000 UTC m=+2538.630483800" lastFinishedPulling="2026-02-16 11:49:06.614120251 +0000 UTC m=+2541.334305231" observedRunningTime="2026-02-16 11:49:06.958987817 +0000 UTC m=+2541.679172797" watchObservedRunningTime="2026-02-16 11:49:06.962341768 +0000 UTC m=+2541.682526748" Feb 16 11:49:10 crc kubenswrapper[4797]: I0216 11:49:10.982723 4797 scope.go:117] "RemoveContainer" 
containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:49:10 crc kubenswrapper[4797]: E0216 11:49:10.983334 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:49:11 crc kubenswrapper[4797]: E0216 11:49:11.985252 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:49:12 crc kubenswrapper[4797]: I0216 11:49:12.645381 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:12 crc kubenswrapper[4797]: I0216 11:49:12.645461 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:12 crc kubenswrapper[4797]: I0216 11:49:12.726839 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:13 crc kubenswrapper[4797]: I0216 11:49:13.085956 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.294619 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxcwd"] Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.295372 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kxcwd" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="registry-server" containerID="cri-o://07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af" gracePeriod=2 Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.726923 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.794817 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2fl9\" (UniqueName: \"kubernetes.io/projected/ca08b349-192d-45b3-a2a2-b5f18bc705c3-kube-api-access-w2fl9\") pod \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.794953 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-utilities\") pod \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.794977 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-catalog-content\") pod \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\" (UID: \"ca08b349-192d-45b3-a2a2-b5f18bc705c3\") " Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.796118 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-utilities" (OuterVolumeSpecName: "utilities") pod "ca08b349-192d-45b3-a2a2-b5f18bc705c3" (UID: "ca08b349-192d-45b3-a2a2-b5f18bc705c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.801217 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca08b349-192d-45b3-a2a2-b5f18bc705c3-kube-api-access-w2fl9" (OuterVolumeSpecName: "kube-api-access-w2fl9") pod "ca08b349-192d-45b3-a2a2-b5f18bc705c3" (UID: "ca08b349-192d-45b3-a2a2-b5f18bc705c3"). InnerVolumeSpecName "kube-api-access-w2fl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.849885 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca08b349-192d-45b3-a2a2-b5f18bc705c3" (UID: "ca08b349-192d-45b3-a2a2-b5f18bc705c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.897426 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2fl9\" (UniqueName: \"kubernetes.io/projected/ca08b349-192d-45b3-a2a2-b5f18bc705c3-kube-api-access-w2fl9\") on node \"crc\" DevicePath \"\"" Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.897465 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:49:16 crc kubenswrapper[4797]: I0216 11:49:16.897474 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca08b349-192d-45b3-a2a2-b5f18bc705c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.045694 4797 generic.go:334] "Generic (PLEG): container finished" podID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerID="07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af" exitCode=0 Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.045741 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxcwd" event={"ID":"ca08b349-192d-45b3-a2a2-b5f18bc705c3","Type":"ContainerDied","Data":"07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af"} Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.045781 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kxcwd" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.045806 4797 scope.go:117] "RemoveContainer" containerID="07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.045793 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kxcwd" event={"ID":"ca08b349-192d-45b3-a2a2-b5f18bc705c3","Type":"ContainerDied","Data":"f1cca3ca378cd181f4dd1d9f4e0ee983d1e4c0ad6a4fadb7d1b7b7c262170d33"} Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.068335 4797 scope.go:117] "RemoveContainer" containerID="3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.090595 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kxcwd"] Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.102532 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kxcwd"] Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.116093 4797 scope.go:117] "RemoveContainer" containerID="69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.146799 4797 scope.go:117] "RemoveContainer" containerID="07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af" Feb 16 11:49:17 crc kubenswrapper[4797]: E0216 11:49:17.147180 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af\": container with ID starting with 07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af not found: ID does not exist" containerID="07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.147218 
4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af"} err="failed to get container status \"07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af\": rpc error: code = NotFound desc = could not find container \"07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af\": container with ID starting with 07d067762f5f614c7a7118be0a1b564a2e608582634811a760521ca3e71694af not found: ID does not exist" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.147244 4797 scope.go:117] "RemoveContainer" containerID="3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613" Feb 16 11:49:17 crc kubenswrapper[4797]: E0216 11:49:17.147546 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613\": container with ID starting with 3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613 not found: ID does not exist" containerID="3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.147572 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613"} err="failed to get container status \"3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613\": rpc error: code = NotFound desc = could not find container \"3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613\": container with ID starting with 3757db46b606e309c92f77eaaf8f255c13d47cc444d23690c30d6d8c77f12613 not found: ID does not exist" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.147611 4797 scope.go:117] "RemoveContainer" containerID="69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc" Feb 16 11:49:17 crc kubenswrapper[4797]: E0216 11:49:17.147968 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc\": container with ID starting with 69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc not found: ID does not exist" containerID="69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.147998 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc"} err="failed to get container status \"69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc\": rpc error: code = NotFound desc = could not find container \"69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc\": container with ID starting with 69fe800725f7fd27d378472f19615a261a09b2840d05631ea3f2c0e3f72278dc not found: ID does not exist" Feb 16 11:49:17 crc kubenswrapper[4797]: I0216 11:49:17.996192 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" path="/var/lib/kubelet/pods/ca08b349-192d-45b3-a2a2-b5f18bc705c3/volumes" Feb 16 11:49:21 crc kubenswrapper[4797]: I0216 11:49:21.983017 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:49:21 crc kubenswrapper[4797]: E0216 11:49:21.984003 4797 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:49:25 crc kubenswrapper[4797]: E0216 11:49:25.993735 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:49:33 crc kubenswrapper[4797]: I0216 11:49:33.983286 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:49:33 crc kubenswrapper[4797]: E0216 11:49:33.984415 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:49:37 crc kubenswrapper[4797]: E0216 11:49:37.984991 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:49:48 crc kubenswrapper[4797]: I0216 11:49:48.983300 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:49:48 crc kubenswrapper[4797]: E0216 11:49:48.985337 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:49:52 crc kubenswrapper[4797]: E0216 11:49:52.985338 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:50:01 crc kubenswrapper[4797]: I0216 11:50:01.982642 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:50:01 crc kubenswrapper[4797]: E0216 11:50:01.983324 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:50:07 crc kubenswrapper[4797]: E0216 11:50:07.985340 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:50:16 crc kubenswrapper[4797]: I0216 11:50:16.982264 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:50:16 crc kubenswrapper[4797]: E0216 11:50:16.983061 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:50:21 crc kubenswrapper[4797]: E0216 11:50:21.985723 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:50:31 crc kubenswrapper[4797]: I0216 11:50:31.983263 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:50:31 crc kubenswrapper[4797]: E0216 11:50:31.984035 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:50:32 crc kubenswrapper[4797]: E0216 11:50:32.987281 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:50:44 crc kubenswrapper[4797]: E0216 11:50:44.985359 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:50:46 crc kubenswrapper[4797]: I0216 11:50:46.012744 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:50:46 crc kubenswrapper[4797]: E0216 11:50:46.014191 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:50:58 crc kubenswrapper[4797]: E0216 11:50:58.985842 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:51:00 crc kubenswrapper[4797]: I0216 11:51:00.983573 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:51:00 crc kubenswrapper[4797]: E0216 11:51:00.984361 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:51:09 crc kubenswrapper[4797]: E0216 11:51:09.985894 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:51:15 crc kubenswrapper[4797]: I0216 11:51:15.990087 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:51:16 crc kubenswrapper[4797]: I0216 11:51:16.377118 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"466afe6c2e87c1336c7e2ffc0baf6756f6e411f9783bf938aa2d97f93e10afd0"} Feb 16 11:51:22 crc kubenswrapper[4797]: E0216 11:51:22.986157 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:51:35 crc kubenswrapper[4797]: E0216 11:51:35.992774 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:51:47 crc kubenswrapper[4797]: I0216 11:51:47.991047 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 11:51:48 crc kubenswrapper[4797]: E0216 11:51:48.122835 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: 
unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:51:48 crc kubenswrapper[4797]: E0216 11:51:48.122899 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 11:51:48 crc kubenswrapper[4797]: E0216 11:51:48.123036 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" logger="UnhandledError" Feb 16 11:51:48 crc kubenswrapper[4797]: E0216 11:51:48.124200 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:52:00 crc kubenswrapper[4797]: E0216 11:52:00.985054 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:52:14 crc kubenswrapper[4797]: E0216 11:52:14.984681 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:52:29 crc kubenswrapper[4797]: E0216 11:52:29.985001 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:52:40 crc kubenswrapper[4797]: E0216 11:52:40.984716 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:52:54 crc kubenswrapper[4797]: E0216 11:52:54.986770 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:53:07 crc kubenswrapper[4797]: E0216 11:53:07.985411 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:53:18 crc kubenswrapper[4797]: E0216 11:53:18.985255 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:53:30 crc kubenswrapper[4797]: E0216 11:53:30.987725 4797 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:53:41 crc kubenswrapper[4797]: I0216 11:53:41.703820 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:53:41 crc kubenswrapper[4797]: I0216 11:53:41.704333 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:53:45 crc kubenswrapper[4797]: E0216 11:53:45.992434 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:53:59 crc kubenswrapper[4797]: E0216 11:53:59.986112 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:54:10 crc kubenswrapper[4797]: E0216 11:54:10.985944 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:54:11 crc kubenswrapper[4797]: I0216 11:54:11.703306 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:54:11 crc kubenswrapper[4797]: I0216 11:54:11.703363 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:54:22 crc kubenswrapper[4797]: E0216 11:54:22.002825 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:54:32 crc kubenswrapper[4797]: E0216 11:54:32.985348 4797 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.245117 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mf9vn/must-gather-mshdc"] Feb 16 11:54:38 crc kubenswrapper[4797]: E0216 11:54:38.246427 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="extract-content" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.246454 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="extract-content" Feb 16 11:54:38 crc kubenswrapper[4797]: E0216 11:54:38.246485 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="extract-utilities" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.246497 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="extract-utilities" Feb 16 11:54:38 crc kubenswrapper[4797]: E0216 11:54:38.246522 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="registry-server" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.246534 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="registry-server" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.246909 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca08b349-192d-45b3-a2a2-b5f18bc705c3" containerName="registry-server" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.248865 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.256725 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mf9vn"/"openshift-service-ca.crt" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.259936 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-mf9vn"/"kube-root-ca.crt" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.266653 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01e0079-0175-4188-a990-79451d57b8d0-must-gather-output\") pod \"must-gather-mshdc\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.266751 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqfh\" (UniqueName: \"kubernetes.io/projected/f01e0079-0175-4188-a990-79451d57b8d0-kube-api-access-bbqfh\") pod \"must-gather-mshdc\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.267446 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mf9vn/must-gather-mshdc"] Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.368217 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01e0079-0175-4188-a990-79451d57b8d0-must-gather-output\") pod \"must-gather-mshdc\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.368283 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqfh\" (UniqueName: \"kubernetes.io/projected/f01e0079-0175-4188-a990-79451d57b8d0-kube-api-access-bbqfh\") pod \"must-gather-mshdc\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.368666 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01e0079-0175-4188-a990-79451d57b8d0-must-gather-output\") pod \"must-gather-mshdc\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.384538 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqfh\" (UniqueName: \"kubernetes.io/projected/f01e0079-0175-4188-a990-79451d57b8d0-kube-api-access-bbqfh\") pod \"must-gather-mshdc\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:38 crc kubenswrapper[4797]: I0216 11:54:38.572577 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 11:54:39 crc kubenswrapper[4797]: I0216 11:54:39.038307 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-mf9vn/must-gather-mshdc"] Feb 16 11:54:39 crc kubenswrapper[4797]: I0216 11:54:39.467112 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/must-gather-mshdc" event={"ID":"f01e0079-0175-4188-a990-79451d57b8d0","Type":"ContainerStarted","Data":"88f4b1aca543ec98e3b9dc2cd60b1c65ad7763d051225c1924d7bd8ce9234204"} Feb 16 11:54:41 crc kubenswrapper[4797]: I0216 11:54:41.705490 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 11:54:41 crc kubenswrapper[4797]: I0216 11:54:41.706086 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 11:54:41 crc kubenswrapper[4797]: I0216 11:54:41.706143 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" Feb 16 11:54:41 crc kubenswrapper[4797]: I0216 11:54:41.707000 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"466afe6c2e87c1336c7e2ffc0baf6756f6e411f9783bf938aa2d97f93e10afd0"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 11:54:41 crc kubenswrapper[4797]: I0216 11:54:41.707054 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://466afe6c2e87c1336c7e2ffc0baf6756f6e411f9783bf938aa2d97f93e10afd0" gracePeriod=600 Feb 16 11:54:42 crc kubenswrapper[4797]: I0216 11:54:42.501659 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="466afe6c2e87c1336c7e2ffc0baf6756f6e411f9783bf938aa2d97f93e10afd0" exitCode=0 Feb 16 11:54:42 crc kubenswrapper[4797]: I0216 11:54:42.501712 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"466afe6c2e87c1336c7e2ffc0baf6756f6e411f9783bf938aa2d97f93e10afd0"} Feb 16 11:54:42 crc kubenswrapper[4797]: I0216 11:54:42.501748 4797 scope.go:117] "RemoveContainer" containerID="fdd7481222b3cf53aaf50b90380acb89f7b2860b9509802a1a09dd3e8c8fc9a8" Feb 16 11:54:45 crc kubenswrapper[4797]: I0216 11:54:45.531114 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa"} Feb 16 11:54:45 crc kubenswrapper[4797]: I0216 11:54:45.532762 4797 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-mf9vn/must-gather-mshdc" event={"ID":"f01e0079-0175-4188-a990-79451d57b8d0","Type":"ContainerStarted","Data":"55ffd266fd5679be314ec1e754883cc68fd5eef87342c0d50fcf95af589d0369"} Feb 16 11:54:46 crc kubenswrapper[4797]: I0216 11:54:46.546784 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/must-gather-mshdc" event={"ID":"f01e0079-0175-4188-a990-79451d57b8d0","Type":"ContainerStarted","Data":"fd1ac09bdabd3dab27fa2a51913dc32279adaeb3010191059475e89ccfee4030"} Feb 16 11:54:46 crc kubenswrapper[4797]: I0216 11:54:46.569532 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mf9vn/must-gather-mshdc" podStartSLOduration=2.541391846 podStartE2EDuration="8.569511444s" podCreationTimestamp="2026-02-16 11:54:38 +0000 UTC" firstStartedPulling="2026-02-16 11:54:39.044101568 +0000 UTC m=+2873.764286548" lastFinishedPulling="2026-02-16 11:54:45.072221176 +0000 UTC m=+2879.792406146" observedRunningTime="2026-02-16 11:54:46.559886102 +0000 UTC m=+2881.280071082" watchObservedRunningTime="2026-02-16 11:54:46.569511444 +0000 UTC m=+2881.289696424" Feb 16 11:54:47 crc kubenswrapper[4797]: E0216 11:54:47.986939 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.433813 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mf9vn/crc-debug-gxf2f"] Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.436289 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.438214 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mf9vn"/"default-dockercfg-w7bn5" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.571537 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59669d1-df50-4b30-b5e5-91cfca97c27e-host\") pod \"crc-debug-gxf2f\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.572263 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wwk\" (UniqueName: \"kubernetes.io/projected/d59669d1-df50-4b30-b5e5-91cfca97c27e-kube-api-access-l2wwk\") pod \"crc-debug-gxf2f\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.674760 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2wwk\" (UniqueName: \"kubernetes.io/projected/d59669d1-df50-4b30-b5e5-91cfca97c27e-kube-api-access-l2wwk\") pod \"crc-debug-gxf2f\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.674864 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59669d1-df50-4b30-b5e5-91cfca97c27e-host\") pod \"crc-debug-gxf2f\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.675051 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59669d1-df50-4b30-b5e5-91cfca97c27e-host\") pod \"crc-debug-gxf2f\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.694198 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2wwk\" (UniqueName: \"kubernetes.io/projected/d59669d1-df50-4b30-b5e5-91cfca97c27e-kube-api-access-l2wwk\") pod \"crc-debug-gxf2f\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:50 crc kubenswrapper[4797]: I0216 11:54:50.756182 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:54:51 crc kubenswrapper[4797]: I0216 11:54:51.599548 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" event={"ID":"d59669d1-df50-4b30-b5e5-91cfca97c27e","Type":"ContainerStarted","Data":"2ef51e1ede2ccb4605b1af6305be6e595d46679a0c96cc755db31e37dc6e1992"} Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.207054 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5ndl9"] Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.211564 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.247533 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ndl9"] Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.304525 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-catalog-content\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.304597 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq8rf\" (UniqueName: \"kubernetes.io/projected/d30b49e1-2d08-4ce5-b22f-949580b8951b-kube-api-access-qq8rf\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.304753 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-utilities\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.406110 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-utilities\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.406172 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-catalog-content\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.406204 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq8rf\" (UniqueName: \"kubernetes.io/projected/d30b49e1-2d08-4ce5-b22f-949580b8951b-kube-api-access-qq8rf\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.407136 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-utilities\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.407344 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-catalog-content\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.430460 4797 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qq8rf\" (UniqueName: \"kubernetes.io/projected/d30b49e1-2d08-4ce5-b22f-949580b8951b-kube-api-access-qq8rf\") pod \"redhat-marketplace-5ndl9\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:57 crc kubenswrapper[4797]: I0216 11:54:57.536757 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:54:59 crc kubenswrapper[4797]: E0216 11:54:59.984394 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:55:02 crc kubenswrapper[4797]: I0216 11:55:02.697216 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" event={"ID":"d59669d1-df50-4b30-b5e5-91cfca97c27e","Type":"ContainerStarted","Data":"468096b6a7407820115bca316d2ab6ff55a1d0ab0f993fd374ec4893912a18db"} Feb 16 11:55:02 crc kubenswrapper[4797]: I0216 11:55:02.725791 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" podStartSLOduration=1.211793432 podStartE2EDuration="12.725774239s" podCreationTimestamp="2026-02-16 11:54:50 +0000 UTC" firstStartedPulling="2026-02-16 11:54:50.812825133 +0000 UTC m=+2885.533010113" lastFinishedPulling="2026-02-16 11:55:02.32680594 +0000 UTC m=+2897.046990920" observedRunningTime="2026-02-16 11:55:02.714824602 +0000 UTC m=+2897.435009572" watchObservedRunningTime="2026-02-16 11:55:02.725774239 +0000 UTC m=+2897.445959239" Feb 16 11:55:02 crc kubenswrapper[4797]: I0216 11:55:02.773109 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ndl9"] Feb 16 11:55:02 crc kubenswrapper[4797]: W0216 11:55:02.777083 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30b49e1_2d08_4ce5_b22f_949580b8951b.slice/crio-1c3c610e06edfa74f10bac0239c7cf81ea56206d82bd549568e0eddc999f75ba WatchSource:0}: Error finding container 1c3c610e06edfa74f10bac0239c7cf81ea56206d82bd549568e0eddc999f75ba: Status 404 returned error can't find the container with id 1c3c610e06edfa74f10bac0239c7cf81ea56206d82bd549568e0eddc999f75ba Feb 16 11:55:03 crc kubenswrapper[4797]: I0216 11:55:03.706330 4797 generic.go:334] "Generic (PLEG): container finished" podID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerID="621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3" exitCode=0 Feb 16 11:55:03 crc kubenswrapper[4797]: I0216 11:55:03.706454 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ndl9" event={"ID":"d30b49e1-2d08-4ce5-b22f-949580b8951b","Type":"ContainerDied","Data":"621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3"} Feb 16 11:55:03 crc kubenswrapper[4797]: I0216 11:55:03.706630 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ndl9" event={"ID":"d30b49e1-2d08-4ce5-b22f-949580b8951b","Type":"ContainerStarted","Data":"1c3c610e06edfa74f10bac0239c7cf81ea56206d82bd549568e0eddc999f75ba"} Feb 16 11:55:04 crc kubenswrapper[4797]: I0216 11:55:04.718198 4797 generic.go:334] "Generic (PLEG): container 
finished" podID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerID="7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463" exitCode=0 Feb 16 11:55:04 crc kubenswrapper[4797]: I0216 11:55:04.718768 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ndl9" event={"ID":"d30b49e1-2d08-4ce5-b22f-949580b8951b","Type":"ContainerDied","Data":"7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463"} Feb 16 11:55:05 crc kubenswrapper[4797]: I0216 11:55:05.731277 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ndl9" event={"ID":"d30b49e1-2d08-4ce5-b22f-949580b8951b","Type":"ContainerStarted","Data":"1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6"} Feb 16 11:55:05 crc kubenswrapper[4797]: I0216 11:55:05.754803 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5ndl9" podStartSLOduration=7.326251803 podStartE2EDuration="8.754780116s" podCreationTimestamp="2026-02-16 11:54:57 +0000 UTC" firstStartedPulling="2026-02-16 11:55:03.707974313 +0000 UTC m=+2898.428159293" lastFinishedPulling="2026-02-16 11:55:05.136502616 +0000 UTC m=+2899.856687606" observedRunningTime="2026-02-16 11:55:05.748120014 +0000 UTC m=+2900.468304994" watchObservedRunningTime="2026-02-16 11:55:05.754780116 +0000 UTC m=+2900.474965096" Feb 16 11:55:07 crc kubenswrapper[4797]: I0216 11:55:07.537635 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:55:07 crc kubenswrapper[4797]: I0216 11:55:07.537874 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:55:07 crc kubenswrapper[4797]: I0216 11:55:07.592489 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:55:13 crc kubenswrapper[4797]: E0216 11:55:13.987333 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:55:17 crc kubenswrapper[4797]: I0216 11:55:17.595253 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:55:17 crc kubenswrapper[4797]: I0216 11:55:17.653243 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ndl9"] Feb 16 11:55:17 crc kubenswrapper[4797]: I0216 11:55:17.847556 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5ndl9" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="registry-server" containerID="cri-o://1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6" gracePeriod=2 Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.363002 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.463758 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-utilities\") pod \"d30b49e1-2d08-4ce5-b22f-949580b8951b\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.463897 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq8rf\" (UniqueName: \"kubernetes.io/projected/d30b49e1-2d08-4ce5-b22f-949580b8951b-kube-api-access-qq8rf\") pod \"d30b49e1-2d08-4ce5-b22f-949580b8951b\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.464139 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-catalog-content\") pod \"d30b49e1-2d08-4ce5-b22f-949580b8951b\" (UID: \"d30b49e1-2d08-4ce5-b22f-949580b8951b\") " Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.464618 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-utilities" (OuterVolumeSpecName: "utilities") pod "d30b49e1-2d08-4ce5-b22f-949580b8951b" (UID: "d30b49e1-2d08-4ce5-b22f-949580b8951b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.465052 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.476826 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d30b49e1-2d08-4ce5-b22f-949580b8951b-kube-api-access-qq8rf" (OuterVolumeSpecName: "kube-api-access-qq8rf") pod "d30b49e1-2d08-4ce5-b22f-949580b8951b" (UID: "d30b49e1-2d08-4ce5-b22f-949580b8951b"). InnerVolumeSpecName "kube-api-access-qq8rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.485922 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d30b49e1-2d08-4ce5-b22f-949580b8951b" (UID: "d30b49e1-2d08-4ce5-b22f-949580b8951b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.566937 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d30b49e1-2d08-4ce5-b22f-949580b8951b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.566968 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq8rf\" (UniqueName: \"kubernetes.io/projected/d30b49e1-2d08-4ce5-b22f-949580b8951b-kube-api-access-qq8rf\") on node \"crc\" DevicePath \"\"" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.860222 4797 generic.go:334] "Generic (PLEG): container finished" podID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerID="1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6" exitCode=0 Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.860292 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5ndl9" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.860325 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ndl9" event={"ID":"d30b49e1-2d08-4ce5-b22f-949580b8951b","Type":"ContainerDied","Data":"1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6"} Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.860397 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5ndl9" event={"ID":"d30b49e1-2d08-4ce5-b22f-949580b8951b","Type":"ContainerDied","Data":"1c3c610e06edfa74f10bac0239c7cf81ea56206d82bd549568e0eddc999f75ba"} Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.860421 4797 scope.go:117] "RemoveContainer" containerID="1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.872810 4797 generic.go:334] "Generic (PLEG): container finished" podID="d59669d1-df50-4b30-b5e5-91cfca97c27e" containerID="468096b6a7407820115bca316d2ab6ff55a1d0ab0f993fd374ec4893912a18db" exitCode=0 Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.872863 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" event={"ID":"d59669d1-df50-4b30-b5e5-91cfca97c27e","Type":"ContainerDied","Data":"468096b6a7407820115bca316d2ab6ff55a1d0ab0f993fd374ec4893912a18db"} Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.906354 4797 scope.go:117] "RemoveContainer" containerID="7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.914491 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ndl9"] Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.921859 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5ndl9"] Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.933878 4797 scope.go:117] "RemoveContainer" containerID="621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.991387 4797 scope.go:117] "RemoveContainer" containerID="1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6" Feb 16 11:55:18 crc kubenswrapper[4797]: E0216 11:55:18.991919 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6\": container with ID starting with 1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6 not found: ID does not exist" containerID="1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.991970 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6"} err="failed to get container status \"1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6\": rpc error: code = NotFound desc = could not find container \"1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6\": container with ID starting with 1640784ba2c016a4bb5b96c9ebb42242054e99e2dad7b1159c18af02271e9ad6 not found: ID does not exist" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.991995 4797 scope.go:117] "RemoveContainer" containerID="7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463" Feb 16 11:55:18 crc kubenswrapper[4797]: E0216 11:55:18.992356 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463\": container with ID starting with 7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463 not found: ID does not exist" containerID="7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.992389 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463"} err="failed to get container status \"7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463\": rpc error: code = NotFound desc = could not find container \"7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463\": container with ID starting with 7cae3ded1441b947c7fee9e5741936b0bbf7893dd228d47727c689506dfa1463 not found: ID does not exist" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.992407 4797 scope.go:117] "RemoveContainer" containerID="621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3" Feb 16 11:55:18 crc kubenswrapper[4797]: E0216 11:55:18.992716 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3\": container with ID starting with 621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3 not found: ID does not exist" containerID="621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3" Feb 16 11:55:18 crc kubenswrapper[4797]: I0216 11:55:18.992761 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3"} err="failed to get container status \"621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3\": rpc error: code = NotFound desc = could not find container \"621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3\": container with ID starting with 621d69eae9b62bc0a23cfefcf015bc39c3d3e81779ff5fe36d201657206adbb3 not found: ID does not exist" Feb 16 11:55:19 crc kubenswrapper[4797]: I0216 11:55:19.990841 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:55:19 crc kubenswrapper[4797]: I0216 11:55:19.995750 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" path="/var/lib/kubelet/pods/d30b49e1-2d08-4ce5-b22f-949580b8951b/volumes" Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.022293 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mf9vn/crc-debug-gxf2f"] Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.030441 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mf9vn/crc-debug-gxf2f"] Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.096277 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59669d1-df50-4b30-b5e5-91cfca97c27e-host\") pod \"d59669d1-df50-4b30-b5e5-91cfca97c27e\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.096412 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d59669d1-df50-4b30-b5e5-91cfca97c27e-host" (OuterVolumeSpecName: "host") pod "d59669d1-df50-4b30-b5e5-91cfca97c27e" (UID: "d59669d1-df50-4b30-b5e5-91cfca97c27e"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.096618 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2wwk\" (UniqueName: \"kubernetes.io/projected/d59669d1-df50-4b30-b5e5-91cfca97c27e-kube-api-access-l2wwk\") pod \"d59669d1-df50-4b30-b5e5-91cfca97c27e\" (UID: \"d59669d1-df50-4b30-b5e5-91cfca97c27e\") " Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.097187 4797 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59669d1-df50-4b30-b5e5-91cfca97c27e-host\") on node \"crc\" DevicePath \"\"" Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.102275 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59669d1-df50-4b30-b5e5-91cfca97c27e-kube-api-access-l2wwk" (OuterVolumeSpecName: "kube-api-access-l2wwk") pod "d59669d1-df50-4b30-b5e5-91cfca97c27e" (UID: "d59669d1-df50-4b30-b5e5-91cfca97c27e"). InnerVolumeSpecName "kube-api-access-l2wwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.199419 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2wwk\" (UniqueName: \"kubernetes.io/projected/d59669d1-df50-4b30-b5e5-91cfca97c27e-kube-api-access-l2wwk\") on node \"crc\" DevicePath \"\"" Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.891594 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ef51e1ede2ccb4605b1af6305be6e595d46679a0c96cc755db31e37dc6e1992" Feb 16 11:55:20 crc kubenswrapper[4797]: I0216 11:55:20.891643 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-gxf2f" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.384044 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-mf9vn/crc-debug-nfnjz"] Feb 16 11:55:21 crc kubenswrapper[4797]: E0216 11:55:21.384778 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59669d1-df50-4b30-b5e5-91cfca97c27e" containerName="container-00" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.384793 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59669d1-df50-4b30-b5e5-91cfca97c27e" containerName="container-00" Feb 16 11:55:21 crc kubenswrapper[4797]: E0216 11:55:21.384814 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="registry-server" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.384821 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="registry-server" Feb 16 11:55:21 crc kubenswrapper[4797]: E0216 11:55:21.384840 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="extract-utilities" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.384846 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="extract-utilities" Feb 16 11:55:21 crc kubenswrapper[4797]: E0216 11:55:21.384868 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="extract-content" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.384875 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="extract-content" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.385159 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59669d1-df50-4b30-b5e5-91cfca97c27e" containerName="container-00" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.385187 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="d30b49e1-2d08-4ce5-b22f-949580b8951b" containerName="registry-server" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.386022 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.388430 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-mf9vn"/"default-dockercfg-w7bn5" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.529877 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9585x\" (UniqueName: \"kubernetes.io/projected/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-kube-api-access-9585x\") pod \"crc-debug-nfnjz\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.530303 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-host\") pod \"crc-debug-nfnjz\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.632442 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-host\") pod \"crc-debug-nfnjz\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.632637 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-host\") pod \"crc-debug-nfnjz\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.632663 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9585x\" (UniqueName: \"kubernetes.io/projected/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-kube-api-access-9585x\") pod \"crc-debug-nfnjz\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.654237 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9585x\" (UniqueName: \"kubernetes.io/projected/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-kube-api-access-9585x\") pod \"crc-debug-nfnjz\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.703501 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.900920 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" event={"ID":"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d","Type":"ContainerStarted","Data":"a00c69120285038e8ad85f40ec74f112dc0a2a97baf1ff0ec6e4b3588c7a5b7c"} Feb 16 11:55:21 crc kubenswrapper[4797]: I0216 11:55:21.993824 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59669d1-df50-4b30-b5e5-91cfca97c27e" path="/var/lib/kubelet/pods/d59669d1-df50-4b30-b5e5-91cfca97c27e/volumes" Feb 16 11:55:22 crc kubenswrapper[4797]: I0216 11:55:22.912224 4797 generic.go:334] "Generic (PLEG): container finished" podID="dba50bfe-9eff-4dc9-971d-aa16d5ebe85d" containerID="c59e7e3539b4c01caac823397e6cf4ce3b36b0d184837530cecd732bf8fc015b" exitCode=1 Feb 16 11:55:22 crc kubenswrapper[4797]: I0216 11:55:22.913695 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" event={"ID":"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d","Type":"ContainerDied","Data":"c59e7e3539b4c01caac823397e6cf4ce3b36b0d184837530cecd732bf8fc015b"} Feb 16 11:55:22 crc kubenswrapper[4797]: I0216 11:55:22.957476 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-mf9vn/crc-debug-nfnjz"] Feb 16 11:55:22 crc kubenswrapper[4797]: I0216 11:55:22.968227 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mf9vn/crc-debug-nfnjz"] Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.031092 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.191433 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-host\") pod \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.191557 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-host" (OuterVolumeSpecName: "host") pod "dba50bfe-9eff-4dc9-971d-aa16d5ebe85d" (UID: "dba50bfe-9eff-4dc9-971d-aa16d5ebe85d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.191649 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9585x\" (UniqueName: \"kubernetes.io/projected/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-kube-api-access-9585x\") pod \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\" (UID: \"dba50bfe-9eff-4dc9-971d-aa16d5ebe85d\") " Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.192326 4797 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-host\") on node \"crc\" DevicePath \"\"" Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.205950 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-kube-api-access-9585x" (OuterVolumeSpecName: "kube-api-access-9585x") pod "dba50bfe-9eff-4dc9-971d-aa16d5ebe85d" (UID: "dba50bfe-9eff-4dc9-971d-aa16d5ebe85d"). InnerVolumeSpecName "kube-api-access-9585x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.294484 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9585x\" (UniqueName: \"kubernetes.io/projected/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d-kube-api-access-9585x\") on node \"crc\" DevicePath \"\"" Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.933419 4797 scope.go:117] "RemoveContainer" containerID="c59e7e3539b4c01caac823397e6cf4ce3b36b0d184837530cecd732bf8fc015b" Feb 16 11:55:24 crc kubenswrapper[4797]: I0216 11:55:24.933431 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mf9vn/crc-debug-nfnjz" Feb 16 11:55:24 crc kubenswrapper[4797]: E0216 11:55:24.988297 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:55:25 crc kubenswrapper[4797]: I0216 11:55:25.992945 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dba50bfe-9eff-4dc9-971d-aa16d5ebe85d" path="/var/lib/kubelet/pods/dba50bfe-9eff-4dc9-971d-aa16d5ebe85d/volumes" Feb 16 11:55:37 crc kubenswrapper[4797]: E0216 11:55:37.991641 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:55:50 crc kubenswrapper[4797]: E0216 11:55:50.986150 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:56:03 crc kubenswrapper[4797]: E0216 11:56:03.984899 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.327363 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ad8679cc-1167-4feb-a53a-49bded099628/init-config-reloader/0.log" Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.485500 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ad8679cc-1167-4feb-a53a-49bded099628/init-config-reloader/0.log" Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.507309 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ad8679cc-1167-4feb-a53a-49bded099628/config-reloader/0.log" Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.550019 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ad8679cc-1167-4feb-a53a-49bded099628/alertmanager/0.log" Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 
11:56:12.678617 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d9f6f7d6-62qcx_a00aa91c-5090-4635-b93f-531cc33523b9/barbican-api/0.log"
Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.710232 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7d9f6f7d6-62qcx_a00aa91c-5090-4635-b93f-531cc33523b9/barbican-api-log/0.log"
Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.831898 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-cdc59674-z5klt_0ec6277d-293d-47f4-8dc0-d407a4d1bfc8/barbican-keystone-listener/0.log"
Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.900789 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-cdc59674-z5klt_0ec6277d-293d-47f4-8dc0-d407a4d1bfc8/barbican-keystone-listener-log/0.log"
Feb 16 11:56:12 crc kubenswrapper[4797]: I0216 11:56:12.976030 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cd6bd5769-dzjd4_a099a104-659d-41b1-a775-201ce4979384/barbican-worker/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.012371 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cd6bd5769-dzjd4_a099a104-659d-41b1-a775-201ce4979384/barbican-worker-log/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.137777 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_25bc0b36-a550-45a1-9632-088bfd0b2249/ceilometer-central-agent/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.180938 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_25bc0b36-a550-45a1-9632-088bfd0b2249/ceilometer-notification-agent/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.198519 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_25bc0b36-a550-45a1-9632-088bfd0b2249/proxy-httpd/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.314458 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_25bc0b36-a550-45a1-9632-088bfd0b2249/sg-core/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.422134 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_794ba2c1-f4d2-4580-a072-5e3089d0cd4a/cinder-api-log/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.447405 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_794ba2c1-f4d2-4580-a072-5e3089d0cd4a/cinder-api/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.630541 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6e588df8-cba6-4258-bead-5c3523b99023/cinder-scheduler/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.658691 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6e588df8-cba6-4258-bead-5c3523b99023/probe/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.829723 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_ddc54c65-b3e8-4bb2-a16a-81a2297b5222/loki-compactor/0.log"
Feb 16 11:56:13 crc kubenswrapper[4797]: I0216 11:56:13.946833 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-cnfpr_8f51ac14-22e0-4e95-901e-02cbad7ce1fe/loki-distributor/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.025772 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-7kt7j_41e46e5d-912d-4425-baea-f40c0435997b/gateway/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.128821 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-s2jlb_d934cad8-4584-4bf1-992c-37a3751d682e/gateway/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.309329 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_8d41ad10-514c-46f6-991f-1d4599322401/loki-index-gateway/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.388433 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_4ab6c5d9-8717-4b1b-8d13-6eb03e52a080/loki-ingester/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.524508 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-vzsll_0d56c15d-4b5f-4eac-9a66-760bf878522b/loki-querier/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.550759 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-s2526_9f1d610c-b137-408a-9cd1-08f01ea36a6a/loki-query-frontend/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.731545 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-h2nsb_075067a8-6831-4f7d-ad0b-7ee700dc165e/init/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.897834 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-h2nsb_075067a8-6831-4f7d-ad0b-7ee700dc165e/init/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.900651 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-h2nsb_075067a8-6831-4f7d-ad0b-7ee700dc165e/dnsmasq-dns/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: I0216 11:56:14.936810 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e651df34-b345-442a-aa48-2f3a52a8df2b/glance-httpd/0.log"
Feb 16 11:56:14 crc kubenswrapper[4797]: E0216 11:56:14.986294 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:56:15 crc kubenswrapper[4797]: I0216 11:56:15.124232 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e651df34-b345-442a-aa48-2f3a52a8df2b/glance-log/0.log"
Feb 16 11:56:15 crc kubenswrapper[4797]: I0216 11:56:15.169619 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d308d940-ff49-426f-abfb-50203189d565/glance-httpd/0.log"
Feb 16 11:56:15 crc kubenswrapper[4797]: I0216 11:56:15.187523 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d308d940-ff49-426f-abfb-50203189d565/glance-log/0.log"
Feb 16 11:56:15 crc kubenswrapper[4797]: I0216 11:56:15.472131 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-99c64f77c-dxwz8_94fb19b8-1690-4768-97cd-e918e0f54862/keystone-api/0.log"
Feb 16 11:56:15 crc kubenswrapper[4797]: I0216 11:56:15.494893 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_23e53487-a14d-4b7b-8e1c-66c20d76309d/kube-state-metrics/0.log"
Feb 16 11:56:15 crc kubenswrapper[4797]: I0216 11:56:15.757811 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fd696486f-x6hfl_247490ab-e07e-4491-854a-1adda964c68a/neutron-api/0.log"
Feb 16 11:56:15 crc kubenswrapper[4797]: I0216 11:56:15.929337 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6fd696486f-x6hfl_247490ab-e07e-4491-854a-1adda964c68a/neutron-httpd/0.log"
Feb 16 11:56:16 crc kubenswrapper[4797]: I0216 11:56:16.250401 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a/nova-api-log/0.log"
Feb 16 11:56:16 crc kubenswrapper[4797]: I0216 11:56:16.307012 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f9ab05ef-ab44-4a5d-bf44-f2e1e2c6699a/nova-api-api/0.log"
Feb 16 11:56:16 crc kubenswrapper[4797]: I0216 11:56:16.385543 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_78ff8619-712d-4c81-bda1-db0af8c708aa/nova-cell0-conductor-conductor/0.log"
Feb 16 11:56:16 crc kubenswrapper[4797]: I0216 11:56:16.618540 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_739194a8-bb4c-411b-ac3b-bb08c86be5f6/nova-cell1-conductor-conductor/0.log"
Feb 16 11:56:16 crc kubenswrapper[4797]: I0216 11:56:16.721937 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_b260ffc6-7065-4f58-8e23-f5b5367123c6/nova-cell1-novncproxy-novncproxy/0.log"
Feb 16 11:56:16 crc kubenswrapper[4797]: I0216 11:56:16.910510 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_520dbe8b-c811-47c8-9e86-7bb3d5cc7580/nova-metadata-log/0.log"
Feb 16 11:57:17 crc kubenswrapper[4797]: I0216 11:56:17.080509 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_560a6700-80d9-4db4-9fef-425ac7981273/nova-scheduler-scheduler/0.log"
Feb 16 11:56:17 crc kubenswrapper[4797]: I0216 11:56:17.241048 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4acd6dc5-d9e3-4a05-aed4-ecc80733f365/mysql-bootstrap/0.log"
Feb 16 11:56:17 crc kubenswrapper[4797]: I0216 11:56:17.461218 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4acd6dc5-d9e3-4a05-aed4-ecc80733f365/galera/0.log"
Feb 16 11:56:17 crc kubenswrapper[4797]: I0216 11:56:17.465073 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4acd6dc5-d9e3-4a05-aed4-ecc80733f365/mysql-bootstrap/0.log"
Feb 16 11:56:17 crc kubenswrapper[4797]: I0216 11:56:17.667898 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_08b607dd-023c-4050-87d5-58f8f7f1714a/mysql-bootstrap/0.log"
Feb 16 11:56:17 crc kubenswrapper[4797]: I0216 11:56:17.770894 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_520dbe8b-c811-47c8-9e86-7bb3d5cc7580/nova-metadata-metadata/0.log"
Feb 16 11:56:17 crc kubenswrapper[4797]: I0216 11:56:17.828064 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_08b607dd-023c-4050-87d5-58f8f7f1714a/mysql-bootstrap/0.log"
Feb 16 11:56:17 crc kubenswrapper[4797]: I0216 11:56:17.878229 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_08b607dd-023c-4050-87d5-58f8f7f1714a/galera/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.012295 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_33c9ce82-2d91-49fd-935c-18996e6ecc18/openstackclient/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.060973 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-dht7z_3114c460-eb74-48a9-bf0c-d32fe63a71be/ovn-controller/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.284450 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zgw2f_d4cd0f86-ee13-4721-b2fe-091b428a14bd/ovsdb-server-init/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.313767 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-9xdrm_c89c74dc-5e73-48fb-9885-281d013b1e0f/openstack-network-exporter/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.445547 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zgw2f_d4cd0f86-ee13-4721-b2fe-091b428a14bd/ovsdb-server-init/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.514760 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zgw2f_d4cd0f86-ee13-4721-b2fe-091b428a14bd/ovsdb-server/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.574566 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-zgw2f_d4cd0f86-ee13-4721-b2fe-091b428a14bd/ovs-vswitchd/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.649502 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_595e46ad-0edd-4cc1-b56d-e4aa4a1f1772/openstack-network-exporter/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.701156 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_595e46ad-0edd-4cc1-b56d-e4aa4a1f1772/ovn-northd/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.804438 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a782b16e-c29b-4d0c-ae20-23e2822d8e02/openstack-network-exporter/0.log"
Feb 16 11:56:18 crc kubenswrapper[4797]: I0216 11:56:18.892849 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a782b16e-c29b-4d0c-ae20-23e2822d8e02/ovsdbserver-nb/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.004276 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8e52214d-a751-4e7f-913e-064677d2fe1f/openstack-network-exporter/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.027180 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_8e52214d-a751-4e7f-913e-064677d2fe1f/ovsdbserver-sb/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.161717 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7fb7cc766-tfhd7_0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8/placement-api/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.215662 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7fb7cc766-tfhd7_0e8b9f5e-8e8c-4315-8cf6-70eeb304deb8/placement-log/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.501989 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76a621c6-7221-46cd-8385-2c733893ccd0/init-config-reloader/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.711328 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76a621c6-7221-46cd-8385-2c733893ccd0/init-config-reloader/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.772352 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76a621c6-7221-46cd-8385-2c733893ccd0/thanos-sidecar/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.793623 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76a621c6-7221-46cd-8385-2c733893ccd0/config-reloader/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.801075 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_76a621c6-7221-46cd-8385-2c733893ccd0/prometheus/0.log"
Feb 16 11:56:19 crc kubenswrapper[4797]: I0216 11:56:19.957475 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1aa87d44-dc52-4398-a8f5-0adf7d33966e/setup-container/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.249006 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1aa87d44-dc52-4398-a8f5-0adf7d33966e/setup-container/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.254568 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_1aa87d44-dc52-4398-a8f5-0adf7d33966e/rabbitmq/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.278440 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_40b82cbf-8ce3-45e9-a87e-a96cbe83488c/setup-container/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.450189 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_40b82cbf-8ce3-45e9-a87e-a96cbe83488c/setup-container/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.517894 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_40b82cbf-8ce3-45e9-a87e-a96cbe83488c/rabbitmq/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.590427 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-64d4c9c779-ctrqz_8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd/proxy-httpd/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.681395 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-64d4c9c779-ctrqz_8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd/proxy-server/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.830427 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-jj9d5_7b48cc2d-f411-40a8-81a8-e7fc66b9a30a/swift-ring-rebalance/0.log"
Feb 16 11:56:20 crc kubenswrapper[4797]: I0216 11:56:20.921894 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/account-auditor/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.048652 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/account-reaper/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.091669 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/account-replicator/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.163260 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/container-auditor/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.187028 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/account-server/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.302344 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/container-replicator/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.324916 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/container-server/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.392706 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/container-updater/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.461792 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/object-auditor/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.526375 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/object-replicator/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.535453 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/object-expirer/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.647203 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/object-server/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.716888 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/object-updater/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.727414 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/rsync/0.log"
Feb 16 11:56:21 crc kubenswrapper[4797]: I0216 11:56:21.815807 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_9f443541-845c-4fdd-b6d1-08aba5c39667/swift-recon-cron/0.log"
Feb 16 11:56:24 crc kubenswrapper[4797]: I0216 11:56:24.753242 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_517059fd-92d8-4058-b426-5653912b7a41/memcached/0.log"
Feb 16 11:56:28 crc kubenswrapper[4797]: E0216 11:56:28.984943 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:56:39 crc kubenswrapper[4797]: E0216 11:56:39.984550 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:56:44 crc kubenswrapper[4797]: I0216 11:56:44.945276 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-c2tk9_be2f5af9-52ca-4678-80c6-ad099ddbf8ff/manager/0.log"
Feb 16 11:56:45 crc kubenswrapper[4797]: I0216 11:56:45.146443 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87_61f49bc4-6aee-43e9-8fc4-8380546e9da4/util/0.log"
Feb 16 11:56:45 crc kubenswrapper[4797]: I0216 11:56:45.391875 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87_61f49bc4-6aee-43e9-8fc4-8380546e9da4/util/0.log"
Feb 16 11:56:45 crc kubenswrapper[4797]: I0216 11:56:45.433770 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87_61f49bc4-6aee-43e9-8fc4-8380546e9da4/pull/0.log"
Feb 16 11:56:45 crc kubenswrapper[4797]: I0216 11:56:45.569440 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87_61f49bc4-6aee-43e9-8fc4-8380546e9da4/pull/0.log"
Feb 16 11:56:45 crc kubenswrapper[4797]: I0216 11:56:45.825269 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87_61f49bc4-6aee-43e9-8fc4-8380546e9da4/util/0.log"
Feb 16 11:56:45 crc kubenswrapper[4797]: I0216 11:56:45.881213 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87_61f49bc4-6aee-43e9-8fc4-8380546e9da4/pull/0.log"
Feb 16 11:56:46 crc kubenswrapper[4797]: I0216 11:56:46.062049 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_fb8767c25a457251b2669501481e586de5c4c83792e0dec9bfa5ebbd13vzl87_61f49bc4-6aee-43e9-8fc4-8380546e9da4/extract/0.log"
Feb 16 11:56:46 crc kubenswrapper[4797]: I0216 11:56:46.213052 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-dn2rf_3439dee8-2272-41cc-8f20-1011e12202e8/manager/0.log"
Feb 16 11:56:46 crc kubenswrapper[4797]: I0216 11:56:46.349786 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-tddsr_4b4c8cfc-5b6b-45cb-97f6-36c766aa6ad9/manager/0.log"
Feb 16 11:56:46 crc kubenswrapper[4797]: I0216 11:56:46.438202 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-gp7jv_e6757076-86f7-48aa-87b3-27d275221210/manager/0.log"
Feb 16 11:56:46 crc kubenswrapper[4797]: I0216 11:56:46.554212 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-llbkc_f2d64af8-fc1a-4a10-9e9d-ca65cb84dd0f/manager/0.log"
Feb 16 11:56:46 crc kubenswrapper[4797]: I0216 11:56:46.923137 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-bmz7r_0c242ffd-e8a4-4f19-80e9-957c31876eb2/manager/0.log"
Feb 16 11:56:47 crc kubenswrapper[4797]: I0216 11:56:47.029021 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-l6xg9_c5013d9b-4630-450f-80bf-312fbc3256ec/manager/0.log"
Feb 16 11:56:47 crc kubenswrapper[4797]: I0216 11:56:47.227457 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-c4rfb_931bff49-5f65-49a0-8dab-c1b5858ec958/manager/0.log"
Feb 16 11:56:47 crc kubenswrapper[4797]: I0216 11:56:47.270439 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-jcjc8_9e8f1871-1ed7-4ef9-8c88-901a64f13ccd/manager/0.log"
Feb 16 11:56:47 crc kubenswrapper[4797]: I0216 11:56:47.473778 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-ctjqh_5ec1f813-5b71-4f97-919a-0414a1a7cb73/manager/0.log"
Feb 16 11:56:47 crc kubenswrapper[4797]: I0216 11:56:47.711622 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-92zcz_0b0f4d9d-f30c-4981-87cf-1ea78972c784/manager/0.log"
Feb 16 11:56:47 crc kubenswrapper[4797]: I0216 11:56:47.845509 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-rcgvv_60624e90-f529-495b-b523-fda5525b3404/manager/0.log"
Feb 16 11:56:48 crc kubenswrapper[4797]: I0216 11:56:48.121491 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cgfvbg_4521c529-8b50-4fd0-8696-b1207798e1f5/manager/0.log"
Feb 16 11:56:48 crc kubenswrapper[4797]: I0216 11:56:48.560299 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-55dffc8d68-qpjsk_01351148-9ca7-4227-a44d-144584794e6f/operator/0.log"
Feb 16 11:56:48 crc kubenswrapper[4797]: I0216 11:56:48.872939 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-knqtd_92cbdcce-96ac-453d-88f4-67c63be7c272/registry-server/0.log"
Feb 16 11:56:49 crc kubenswrapper[4797]: I0216 11:56:49.135498 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-q54xh_7745cf21-caab-4866-99f1-f2d819e779d3/manager/0.log"
Feb 16 11:56:49 crc kubenswrapper[4797]: I0216 11:56:49.358006 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-czcgn_49b234d6-478d-44ec-9164-9482c3242ea2/manager/0.log"
Feb 16 11:56:49 crc kubenswrapper[4797]: I0216 11:56:49.576847 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-c9frn_2322e8ef-0322-48a9-85fc-95345d68dea3/operator/0.log"
Feb 16 11:56:49 crc kubenswrapper[4797]: I0216 11:56:49.626947 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-9tbc9_669a405d-b513-461b-9d3d-fe7938e08dec/manager/0.log"
Feb 16 11:56:49 crc kubenswrapper[4797]: I0216 11:56:49.874648 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-pvbnl_ae9b635f-ae0c-4d62-9860-a9817b6d668e/manager/0.log"
Feb 16 11:56:50 crc kubenswrapper[4797]: I0216 11:56:50.081861 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-rk7qz_c05e0068-d50b-459b-86ab-b076230093b6/manager/0.log"
Feb 16 11:56:50 crc kubenswrapper[4797]: I0216 11:56:50.126156 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6b65fbbb9f-7rjfc_40a83645-f1ce-4393-90d1-7fb9d2144bfa/manager/0.log"
Feb 16 11:56:50 crc kubenswrapper[4797]: I0216 11:56:50.410780 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-vxl6t_ca037f0f-8b30-4f77-b039-b4d92368af5a/manager/0.log"
Feb 16 11:56:50 crc kubenswrapper[4797]: I0216 11:56:50.619035 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b85768bb-4k7fc_e88a6189-ad47-438a-baab-3dcc5d781126/manager/0.log"
Feb 16 11:56:51 crc kubenswrapper[4797]: I0216 11:56:51.989042 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 11:56:52 crc kubenswrapper[4797]: E0216 11:56:52.114004 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 11:56:52 crc kubenswrapper[4797]: E0216 11:56:52.114102 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 11:56:52 crc kubenswrapper[4797]: E0216 11:56:52.114306 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 11:56:52 crc kubenswrapper[4797]: E0216 11:56:52.115511 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:56:52 crc kubenswrapper[4797]: I0216 11:56:52.385771 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-5f2m4_ff2de9ed-5f7c-4cf3-80f0-f0b12901438f/manager/0.log"
Feb 16 11:57:05 crc kubenswrapper[4797]: E0216 11:57:05.992340 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:57:10 crc kubenswrapper[4797]: I0216 11:57:10.662647 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-rgs6z_e80deaa4-4f1c-4a94-9bac-cd4244a7d369/control-plane-machine-set-operator/0.log"
Feb 16 11:57:10 crc kubenswrapper[4797]: I0216 11:57:10.891516 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pvwfm_931e6a97-a601-42c3-8b62-ef08752cf75c/kube-rbac-proxy/0.log"
Feb 16 11:57:10 crc kubenswrapper[4797]: I0216 11:57:10.893107 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-pvwfm_931e6a97-a601-42c3-8b62-ef08752cf75c/machine-api-operator/0.log"
Feb 16 11:57:11 crc kubenswrapper[4797]: I0216 11:57:11.703515 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:57:11 crc kubenswrapper[4797]: I0216 11:57:11.703656 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:57:16 crc kubenswrapper[4797]: E0216 11:57:16.985422 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:57:25 crc kubenswrapper[4797]: I0216 11:57:25.346022 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-7p57w_d897848c-20a8-4efe-b2a0-60d5349f5cc0/cert-manager-controller/0.log"
Feb 16 11:57:25 crc kubenswrapper[4797]: I0216 11:57:25.448906 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-48xwg_d63e62f5-bb87-459b-b430-fe6dccef3dd7/cert-manager-cainjector/0.log"
Feb 16 11:57:25 crc kubenswrapper[4797]: I0216 11:57:25.539899 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-85h5w_ea140fff-b2af-47e6-beb4-3edc6c997e62/cert-manager-webhook/0.log"
Feb 16 11:57:31 crc kubenswrapper[4797]: E0216 11:57:31.985384 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:57:38 crc kubenswrapper[4797]: I0216 11:57:38.729002 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-8j77n_93bb854d-0f24-4def-95d9-17a1efbd0afa/nmstate-console-plugin/0.log"
Feb 16 11:57:38 crc kubenswrapper[4797]: I0216 11:57:38.894379 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bfllp_de893082-511b-4ef6-a57a-172c7f44f063/nmstate-handler/0.log"
Feb 16 11:57:38 crc kubenswrapper[4797]: I0216 11:57:38.974643 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-tv8gr_4ee4a2b6-b1e2-43cb-9677-572351c9f2b6/kube-rbac-proxy/0.log"
Feb 16 11:57:39 crc kubenswrapper[4797]: I0216 11:57:39.033733 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-tv8gr_4ee4a2b6-b1e2-43cb-9677-572351c9f2b6/nmstate-metrics/0.log"
Feb 16 11:57:39 crc kubenswrapper[4797]: I0216 11:57:39.143397 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-qwfkw_c4eb86f5-1f11-4785-a785-aae078cac6f4/nmstate-operator/0.log"
Feb 16 11:57:39 crc kubenswrapper[4797]: I0216 11:57:39.308077 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-cbtgj_d2646d1c-e478-4ffb-916e-8feb7e020022/nmstate-webhook/0.log"
Feb 16 11:57:41 crc kubenswrapper[4797]: I0216 11:57:41.703138 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:57:41 crc kubenswrapper[4797]: I0216 11:57:41.703679 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:57:43 crc kubenswrapper[4797]: E0216 11:57:43.984645 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:57:53 crc kubenswrapper[4797]: I0216 11:57:53.123709 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-547985c4bd-snwnp_d81adcdb-f1e8-4f65-b501-18b104ad7a02/manager/0.log"
Feb 16 11:57:53 crc kubenswrapper[4797]: I0216 11:57:53.227029 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-547985c4bd-snwnp_d81adcdb-f1e8-4f65-b501-18b104ad7a02/kube-rbac-proxy/0.log"
Feb 16 11:57:56 crc kubenswrapper[4797]: E0216 11:57:56.985523 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:58:07 crc kubenswrapper[4797]: I0216 11:58:07.014789 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-m4hxc_6635912a-ab64-4ee0-8b76-17ed2b17a7cd/prometheus-operator/0.log"
Feb 16 11:58:07 crc kubenswrapper[4797]: I0216 11:58:07.147423 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_f30f63d1-0224-458f-9dcb-c5ca305c5a10/prometheus-operator-admission-webhook/0.log"
Feb 16 11:58:07 crc kubenswrapper[4797]: I0216 11:58:07.241368 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_a5d0ca59-5007-44d2-86e6-342c60bddb88/prometheus-operator-admission-webhook/0.log"
Feb 16 11:58:07 crc kubenswrapper[4797]: I0216 11:58:07.326482 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7vsxb_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8/operator/0.log"
Feb 16 11:58:07 crc kubenswrapper[4797]: I0216 11:58:07.406172 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-m98kn_f6001f2a-b067-4fe6-b250-bce9a306e7e6/perses-operator/0.log"
Feb 16 11:58:08 crc kubenswrapper[4797]: E0216 11:58:08.987387 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.703746 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.704045 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.704087 4797 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl"
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.704812 4797 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa"} pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.704866 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" containerID="cri-o://3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" gracePeriod=600
Feb 16 11:58:11 crc kubenswrapper[4797]: E0216 11:58:11.832025 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21"
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.880685 4797 generic.go:334] "Generic (PLEG): container finished" podID="128f4e85-fd17-4281-97d2-872fda792b21" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" exitCode=0
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.880735 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerDied","Data":"3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa"}
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.880771 4797 scope.go:117] "RemoveContainer" containerID="466afe6c2e87c1336c7e2ffc0baf6756f6e411f9783bf938aa2d97f93e10afd0"
Feb 16 11:58:11 crc kubenswrapper[4797]: I0216 11:58:11.881241 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa"
Feb 16 11:58:11 crc kubenswrapper[4797]: E0216 11:58:11.881528 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21"
Feb 16 11:58:19 crc kubenswrapper[4797]: E0216 11:58:19.985093 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:58:22 crc kubenswrapper[4797]: I0216 11:58:22.703493 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-99x9f_99dd2e9f-adf7-4fe6-861b-d66125f5b08c/kube-rbac-proxy/0.log"
Feb 16 11:58:22 crc kubenswrapper[4797]: I0216 11:58:22.723280 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-99x9f_99dd2e9f-adf7-4fe6-861b-d66125f5b08c/controller/0.log"
Feb 16 11:58:22 crc kubenswrapper[4797]: I0216 11:58:22.892638 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-frr-files/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.034603 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-metrics/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.070253 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-reloader/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.077918 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-frr-files/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.148017 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-reloader/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.294666 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-reloader/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.300697 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-metrics/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.319379 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-frr-files/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.397616 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-metrics/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.503705 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-metrics/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.535205 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-frr-files/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.540999 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/cp-reloader/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.617941 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/controller/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.702383 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/frr-metrics/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.822239 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/kube-rbac-proxy/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.831067 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/kube-rbac-proxy-frr/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.962059 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/reloader/0.log"
Feb 16 11:58:23 crc kubenswrapper[4797]: I0216 11:58:23.983027 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa"
Feb 16 11:58:23 crc kubenswrapper[4797]: E0216 11:58:23.983290 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21"
Feb 16 11:58:24 crc kubenswrapper[4797]: I0216 11:58:24.084718 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-fl28v_a890db88-edf0-48b0-82e7-f83d8d762493/frr-k8s-webhook-server/0.log"
Feb 16 11:58:24 crc kubenswrapper[4797]: I0216 11:58:24.325270 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7576dc79b7-85mbt_85a3876a-7599-42bc-871c-559ab66a672e/manager/0.log"
Feb 16 11:58:24 crc kubenswrapper[4797]: I0216 11:58:24.415961 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-965c86b89-tdccp_0808430f-3807-401c-8e89-be026c69be52/webhook-server/0.log"
Feb 16 11:58:24 crc kubenswrapper[4797]: I0216 11:58:24.605408 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q95br_b451686c-e089-48e1-82a2-1a889e465691/kube-rbac-proxy/0.log"
Feb 16 11:58:24 crc kubenswrapper[4797]: I0216 11:58:24.688346 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-57ng7_ae064aa9-f20f-4271-80aa-4df1aa1ecd35/frr/0.log"
Feb 16 11:58:25 crc kubenswrapper[4797]: I0216 11:58:25.038455 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-q95br_b451686c-e089-48e1-82a2-1a889e465691/speaker/0.log"
Feb 16 11:58:32 crc kubenswrapper[4797]: E0216 11:58:32.984679 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:58:38 crc kubenswrapper[4797]: I0216 11:58:38.982951 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa"
Feb 16 11:58:38 crc kubenswrapper[4797]: E0216 11:58:38.983778 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21"
Feb 16 11:58:39 crc kubenswrapper[4797]: I0216 11:58:39.492495 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp_adde08ba-a127-4bfe-87fa-af192ad0a1de/util/0.log"
Feb 16 11:58:39 crc kubenswrapper[4797]: I0216 11:58:39.958464 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp_adde08ba-a127-4bfe-87fa-af192ad0a1de/pull/0.log"
Feb 16 11:58:39 crc kubenswrapper[4797]: I0216 11:58:39.982783 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp_adde08ba-a127-4bfe-87fa-af192ad0a1de/util/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.023220 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp_adde08ba-a127-4bfe-87fa-af192ad0a1de/pull/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.182440 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp_adde08ba-a127-4bfe-87fa-af192ad0a1de/extract/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.198331 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp_adde08ba-a127-4bfe-87fa-af192ad0a1de/pull/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.215172 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651m9rrp_adde08ba-a127-4bfe-87fa-af192ad0a1de/util/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.356134 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn_2375197b-bcee-4713-841d-26bf583e7502/util/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.520844 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn_2375197b-bcee-4713-841d-26bf583e7502/pull/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.526514 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn_2375197b-bcee-4713-841d-26bf583e7502/util/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.534250 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn_2375197b-bcee-4713-841d-26bf583e7502/pull/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.734908 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn_2375197b-bcee-4713-841d-26bf583e7502/pull/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.736951 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn_2375197b-bcee-4713-841d-26bf583e7502/extract/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.742060 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089cjcn_2375197b-bcee-4713-841d-26bf583e7502/util/0.log"
Feb 16 11:58:40 crc kubenswrapper[4797]: I0216 11:58:40.937236 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n_4637f00b-8997-47b5-8164-e0ee843a75bd/util/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.085800 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n_4637f00b-8997-47b5-8164-e0ee843a75bd/util/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.096531 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n_4637f00b-8997-47b5-8164-e0ee843a75bd/pull/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.140970 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n_4637f00b-8997-47b5-8164-e0ee843a75bd/pull/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.341035 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n_4637f00b-8997-47b5-8164-e0ee843a75bd/util/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.347269 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n_4637f00b-8997-47b5-8164-e0ee843a75bd/pull/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.357382 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213pzm4n_4637f00b-8997-47b5-8164-e0ee843a75bd/extract/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.539816 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8fw4k_b29a8611-590f-4899-aab6-1c60031e24ad/extract-utilities/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.750437 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8fw4k_b29a8611-590f-4899-aab6-1c60031e24ad/extract-content/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.787639 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8fw4k_b29a8611-590f-4899-aab6-1c60031e24ad/extract-content/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.789258 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8fw4k_b29a8611-590f-4899-aab6-1c60031e24ad/extract-utilities/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.961854 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8fw4k_b29a8611-590f-4899-aab6-1c60031e24ad/extract-utilities/0.log"
Feb 16 11:58:41 crc kubenswrapper[4797]: I0216 11:58:41.975288 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8fw4k_b29a8611-590f-4899-aab6-1c60031e24ad/extract-content/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.182274 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gnspr_8785aee1-a170-4747-bb85-ddd8653c51d2/extract-utilities/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.371596 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8fw4k_b29a8611-590f-4899-aab6-1c60031e24ad/registry-server/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.409914 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gnspr_8785aee1-a170-4747-bb85-ddd8653c51d2/extract-content/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.463083 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gnspr_8785aee1-a170-4747-bb85-ddd8653c51d2/extract-content/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.489272 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gnspr_8785aee1-a170-4747-bb85-ddd8653c51d2/extract-utilities/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.654966 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gnspr_8785aee1-a170-4747-bb85-ddd8653c51d2/extract-utilities/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.699793 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gnspr_8785aee1-a170-4747-bb85-ddd8653c51d2/extract-content/0.log"
Feb 16 11:58:42 crc kubenswrapper[4797]: I0216 11:58:42.912753 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k_d2dd227f-1ab6-480b-9f5d-5957cfba1d30/util/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.119460 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gnspr_8785aee1-a170-4747-bb85-ddd8653c51d2/registry-server/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.166459 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k_d2dd227f-1ab6-480b-9f5d-5957cfba1d30/pull/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.166761 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k_d2dd227f-1ab6-480b-9f5d-5957cfba1d30/pull/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.174827 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k_d2dd227f-1ab6-480b-9f5d-5957cfba1d30/util/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.371385 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k_d2dd227f-1ab6-480b-9f5d-5957cfba1d30/util/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.373643 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k_d2dd227f-1ab6-480b-9f5d-5957cfba1d30/pull/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.419313 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecapwm4k_d2dd227f-1ab6-480b-9f5d-5957cfba1d30/extract/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.615055 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mnplw_31217ef5-71b8-4a30-b0c4-f5cd8a51e372/marketplace-operator/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.620242 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pjpsv_4698d1d1-b33e-4ede-bd45-ac6adf4d64a4/extract-utilities/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.783682 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pjpsv_4698d1d1-b33e-4ede-bd45-ac6adf4d64a4/extract-utilities/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.800387 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pjpsv_4698d1d1-b33e-4ede-bd45-ac6adf4d64a4/extract-content/0.log"
Feb 16 11:58:43 crc kubenswrapper[4797]: I0216 11:58:43.803835 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pjpsv_4698d1d1-b33e-4ede-bd45-ac6adf4d64a4/extract-content/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.015105 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pjpsv_4698d1d1-b33e-4ede-bd45-ac6adf4d64a4/extract-content/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.023960 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pjpsv_4698d1d1-b33e-4ede-bd45-ac6adf4d64a4/extract-utilities/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.069263 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwk2z_0db1d294-337b-4051-b922-c7c3270426f2/extract-utilities/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.139803 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pjpsv_4698d1d1-b33e-4ede-bd45-ac6adf4d64a4/registry-server/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.224293 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwk2z_0db1d294-337b-4051-b922-c7c3270426f2/extract-content/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.398020 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwk2z_0db1d294-337b-4051-b922-c7c3270426f2/extract-utilities/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.432810 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwk2z_0db1d294-337b-4051-b922-c7c3270426f2/extract-content/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.564123 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwk2z_0db1d294-337b-4051-b922-c7c3270426f2/extract-content/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.587225 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwk2z_0db1d294-337b-4051-b922-c7c3270426f2/extract-utilities/0.log"
Feb 16 11:58:44 crc kubenswrapper[4797]: I0216 11:58:44.943003 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pwk2z_0db1d294-337b-4051-b922-c7c3270426f2/registry-server/0.log"
Feb 16 11:58:46 crc kubenswrapper[4797]: E0216 11:58:46.984528 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0"
Feb 16 11:58:50 crc kubenswrapper[4797]: I0216 11:58:50.982352 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa"
Feb 16 11:58:50 crc kubenswrapper[4797]: 
E0216 11:58:50.983631 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:58:58 crc kubenswrapper[4797]: I0216 11:58:58.824345 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65f76847bb-259w6_f30f63d1-0224-458f-9dcb-c5ca305c5a10/prometheus-operator-admission-webhook/0.log" Feb 16 11:58:58 crc kubenswrapper[4797]: I0216 11:58:58.835004 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-m4hxc_6635912a-ab64-4ee0-8b76-17ed2b17a7cd/prometheus-operator/0.log" Feb 16 11:58:58 crc kubenswrapper[4797]: I0216 11:58:58.863765 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-65f76847bb-xfpmb_a5d0ca59-5007-44d2-86e6-342c60bddb88/prometheus-operator-admission-webhook/0.log" Feb 16 11:58:58 crc kubenswrapper[4797]: E0216 11:58:58.984531 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:58:59 crc kubenswrapper[4797]: I0216 11:58:59.009485 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-m98kn_f6001f2a-b067-4fe6-b250-bce9a306e7e6/perses-operator/0.log" Feb 16 11:58:59 crc kubenswrapper[4797]: I0216 11:58:59.037679 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-7vsxb_c6ebc66d-6ff0-4bbf-9578-b4d5a1dce1b8/operator/0.log" Feb 16 11:59:04 crc kubenswrapper[4797]: I0216 11:59:04.983843 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 11:59:04 crc kubenswrapper[4797]: E0216 11:59:04.985047 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:59:09 crc kubenswrapper[4797]: E0216 11:59:09.984432 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:59:12 crc kubenswrapper[4797]: I0216 11:59:12.937510 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-547985c4bd-snwnp_d81adcdb-f1e8-4f65-b501-18b104ad7a02/kube-rbac-proxy/0.log" Feb 16 11:59:13 crc kubenswrapper[4797]: I0216 
11:59:13.044681 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-547985c4bd-snwnp_d81adcdb-f1e8-4f65-b501-18b104ad7a02/manager/0.log" Feb 16 11:59:19 crc kubenswrapper[4797]: I0216 11:59:19.982945 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 11:59:19 crc kubenswrapper[4797]: E0216 11:59:19.983819 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:59:23 crc kubenswrapper[4797]: E0216 11:59:23.984442 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:59:30 crc kubenswrapper[4797]: I0216 11:59:30.217257 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-64d4c9c779-ctrqz" podUID="8ccabb0a-9a9d-4535-9075-b9ff8bc59dbd" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 16 11:59:30 crc kubenswrapper[4797]: I0216 11:59:30.983650 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 11:59:30 crc kubenswrapper[4797]: E0216 11:59:30.984110 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:59:34 crc kubenswrapper[4797]: E0216 11:59:34.985726 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:59:37 crc kubenswrapper[4797]: E0216 11:59:37.884218 4797 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.192:43336->38.102.83.192:41817: write tcp 38.102.83.192:43336->38.102.83.192:41817: write: broken pipe Feb 16 11:59:42 crc kubenswrapper[4797]: I0216 11:59:42.983263 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 11:59:42 crc kubenswrapper[4797]: E0216 11:59:42.985645 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" 
podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:59:47 crc kubenswrapper[4797]: E0216 11:59:47.985179 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.743376 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pgbm5"] Feb 16 11:59:53 crc kubenswrapper[4797]: E0216 11:59:53.745896 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dba50bfe-9eff-4dc9-971d-aa16d5ebe85d" containerName="container-00" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.745914 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="dba50bfe-9eff-4dc9-971d-aa16d5ebe85d" containerName="container-00" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.746213 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="dba50bfe-9eff-4dc9-971d-aa16d5ebe85d" containerName="container-00" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.748198 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.751090 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgbm5"] Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.804164 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-utilities\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.804409 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjv2j\" (UniqueName: \"kubernetes.io/projected/80aae4f7-4253-40ec-918f-6b67b6439e4d-kube-api-access-pjv2j\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.804625 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-catalog-content\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.907835 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjv2j\" (UniqueName: \"kubernetes.io/projected/80aae4f7-4253-40ec-918f-6b67b6439e4d-kube-api-access-pjv2j\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.908009 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-catalog-content\") pod \"community-operators-pgbm5\" (UID: 
\"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.908228 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-utilities\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.909075 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-utilities\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.909965 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-catalog-content\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:53 crc kubenswrapper[4797]: I0216 11:59:53.938777 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjv2j\" (UniqueName: \"kubernetes.io/projected/80aae4f7-4253-40ec-918f-6b67b6439e4d-kube-api-access-pjv2j\") pod \"community-operators-pgbm5\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:54 crc kubenswrapper[4797]: I0216 11:59:54.080357 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 11:59:54 crc kubenswrapper[4797]: W0216 11:59:54.663725 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80aae4f7_4253_40ec_918f_6b67b6439e4d.slice/crio-bd1a646ad75649b4fadd94b0985f8f71e0540ea12a12b641ca1e31aa23bc339e WatchSource:0}: Error finding container bd1a646ad75649b4fadd94b0985f8f71e0540ea12a12b641ca1e31aa23bc339e: Status 404 returned error can't find the container with id bd1a646ad75649b4fadd94b0985f8f71e0540ea12a12b641ca1e31aa23bc339e Feb 16 11:59:54 crc kubenswrapper[4797]: I0216 11:59:54.668550 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgbm5"] Feb 16 11:59:54 crc kubenswrapper[4797]: I0216 11:59:54.930866 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerStarted","Data":"a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef"} Feb 16 11:59:54 crc kubenswrapper[4797]: I0216 11:59:54.931223 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerStarted","Data":"bd1a646ad75649b4fadd94b0985f8f71e0540ea12a12b641ca1e31aa23bc339e"} Feb 16 11:59:55 crc kubenswrapper[4797]: I0216 11:59:55.942190 4797 generic.go:334] "Generic (PLEG): container finished" podID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerID="a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef" exitCode=0 Feb 16 11:59:55 crc kubenswrapper[4797]: I0216 11:59:55.942470 4797 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerDied","Data":"a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef"} Feb 16 11:59:56 crc kubenswrapper[4797]: I0216 11:59:56.963314 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerStarted","Data":"e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb"} Feb 16 11:59:57 crc kubenswrapper[4797]: I0216 11:59:57.985914 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 11:59:57 crc kubenswrapper[4797]: E0216 11:59:57.986391 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 11:59:58 crc kubenswrapper[4797]: I0216 11:59:58.981627 4797 generic.go:334] "Generic (PLEG): container finished" podID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerID="e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb" exitCode=0 Feb 16 11:59:58 crc kubenswrapper[4797]: I0216 11:59:58.981693 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerDied","Data":"e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb"} Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.002849 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerStarted","Data":"18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d"} Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.050625 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pgbm5" podStartSLOduration=3.63969112 podStartE2EDuration="7.050561178s" podCreationTimestamp="2026-02-16 11:59:53 +0000 UTC" firstStartedPulling="2026-02-16 11:59:55.945322006 +0000 UTC m=+3190.665506986" lastFinishedPulling="2026-02-16 11:59:59.356192064 +0000 UTC m=+3194.076377044" observedRunningTime="2026-02-16 12:00:00.042086197 +0000 UTC m=+3194.762271177" watchObservedRunningTime="2026-02-16 12:00:00.050561178 +0000 UTC m=+3194.770746189" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.145070 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff"] Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.147134 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.151502 4797 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.152278 4797 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.160942 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff"] Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.275275 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f3dde03-7d95-40f7-94d3-d94bafa6e551-config-volume\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.275561 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f3dde03-7d95-40f7-94d3-d94bafa6e551-secret-volume\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.275652 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m48rp\" (UniqueName: \"kubernetes.io/projected/7f3dde03-7d95-40f7-94d3-d94bafa6e551-kube-api-access-m48rp\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.377862 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f3dde03-7d95-40f7-94d3-d94bafa6e551-secret-volume\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.377960 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m48rp\" (UniqueName: \"kubernetes.io/projected/7f3dde03-7d95-40f7-94d3-d94bafa6e551-kube-api-access-m48rp\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.378007 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f3dde03-7d95-40f7-94d3-d94bafa6e551-config-volume\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.379265 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f3dde03-7d95-40f7-94d3-d94bafa6e551-config-volume\") pod 
\"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.388311 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f3dde03-7d95-40f7-94d3-d94bafa6e551-secret-volume\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.402566 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m48rp\" (UniqueName: \"kubernetes.io/projected/7f3dde03-7d95-40f7-94d3-d94bafa6e551-kube-api-access-m48rp\") pod \"collect-profiles-29520720-j9gff\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.473854 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:00 crc kubenswrapper[4797]: I0216 12:00:00.929798 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff"] Feb 16 12:00:00 crc kubenswrapper[4797]: W0216 12:00:00.943409 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f3dde03_7d95_40f7_94d3_d94bafa6e551.slice/crio-43da5704cd4692489a00d8bea09ed30d89a78dfe14f59c281e12e87fb2507bb1 WatchSource:0}: Error finding container 43da5704cd4692489a00d8bea09ed30d89a78dfe14f59c281e12e87fb2507bb1: Status 404 returned error can't find the container with id 43da5704cd4692489a00d8bea09ed30d89a78dfe14f59c281e12e87fb2507bb1 Feb 16 12:00:01 crc kubenswrapper[4797]: I0216 12:00:01.020381 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" event={"ID":"7f3dde03-7d95-40f7-94d3-d94bafa6e551","Type":"ContainerStarted","Data":"43da5704cd4692489a00d8bea09ed30d89a78dfe14f59c281e12e87fb2507bb1"} Feb 16 12:00:01 crc kubenswrapper[4797]: E0216 12:00:01.985211 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:00:02 crc kubenswrapper[4797]: I0216 12:00:02.031903 4797 generic.go:334] "Generic (PLEG): container finished" podID="7f3dde03-7d95-40f7-94d3-d94bafa6e551" containerID="e75537b389b7fe0135ce7f31883919d8b2be9c65093c9ac9ecd184d9c3342852" exitCode=0 Feb 16 12:00:02 crc kubenswrapper[4797]: I0216 12:00:02.031982 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" event={"ID":"7f3dde03-7d95-40f7-94d3-d94bafa6e551","Type":"ContainerDied","Data":"e75537b389b7fe0135ce7f31883919d8b2be9c65093c9ac9ecd184d9c3342852"} Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.425597 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.549968 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f3dde03-7d95-40f7-94d3-d94bafa6e551-config-volume\") pod \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.550196 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f3dde03-7d95-40f7-94d3-d94bafa6e551-secret-volume\") pod \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.550258 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m48rp\" (UniqueName: \"kubernetes.io/projected/7f3dde03-7d95-40f7-94d3-d94bafa6e551-kube-api-access-m48rp\") pod \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\" (UID: \"7f3dde03-7d95-40f7-94d3-d94bafa6e551\") " Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.550892 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f3dde03-7d95-40f7-94d3-d94bafa6e551-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f3dde03-7d95-40f7-94d3-d94bafa6e551" (UID: "7f3dde03-7d95-40f7-94d3-d94bafa6e551"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.557075 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3dde03-7d95-40f7-94d3-d94bafa6e551-kube-api-access-m48rp" (OuterVolumeSpecName: "kube-api-access-m48rp") pod "7f3dde03-7d95-40f7-94d3-d94bafa6e551" (UID: "7f3dde03-7d95-40f7-94d3-d94bafa6e551"). InnerVolumeSpecName "kube-api-access-m48rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.558185 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3dde03-7d95-40f7-94d3-d94bafa6e551-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f3dde03-7d95-40f7-94d3-d94bafa6e551" (UID: "7f3dde03-7d95-40f7-94d3-d94bafa6e551"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.653489 4797 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f3dde03-7d95-40f7-94d3-d94bafa6e551-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.653548 4797 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f3dde03-7d95-40f7-94d3-d94bafa6e551-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:03 crc kubenswrapper[4797]: I0216 12:00:03.653608 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m48rp\" (UniqueName: \"kubernetes.io/projected/7f3dde03-7d95-40f7-94d3-d94bafa6e551-kube-api-access-m48rp\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.054985 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" event={"ID":"7f3dde03-7d95-40f7-94d3-d94bafa6e551","Type":"ContainerDied","Data":"43da5704cd4692489a00d8bea09ed30d89a78dfe14f59c281e12e87fb2507bb1"} Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.055023 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520720-j9gff" Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.055030 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43da5704cd4692489a00d8bea09ed30d89a78dfe14f59c281e12e87fb2507bb1" Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.081477 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.081516 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.132708 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.538975 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"] Feb 16 12:00:04 crc kubenswrapper[4797]: I0216 12:00:04.550565 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520675-wfk5k"] Feb 16 12:00:05 crc kubenswrapper[4797]: I0216 12:00:05.123565 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 12:00:05 crc kubenswrapper[4797]: I0216 12:00:05.188132 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgbm5"] Feb 16 12:00:06 crc kubenswrapper[4797]: I0216 12:00:06.002223 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75fdce59-c937-4565-b49a-1668d2504c37" path="/var/lib/kubelet/pods/75fdce59-c937-4565-b49a-1668d2504c37/volumes" Feb 16 12:00:06 crc kubenswrapper[4797]: I0216 12:00:06.359758 4797 scope.go:117] "RemoveContainer" containerID="048fefc750681fb0c59c2b58f45632528e46964cc9146e8ae6cf000c3a699230" Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.090141 4797 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-pgbm5" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="registry-server" containerID="cri-o://18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d" gracePeriod=2 Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.624306 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.737425 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-catalog-content\") pod \"80aae4f7-4253-40ec-918f-6b67b6439e4d\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.737741 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjv2j\" (UniqueName: \"kubernetes.io/projected/80aae4f7-4253-40ec-918f-6b67b6439e4d-kube-api-access-pjv2j\") pod \"80aae4f7-4253-40ec-918f-6b67b6439e4d\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.737939 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-utilities\") pod \"80aae4f7-4253-40ec-918f-6b67b6439e4d\" (UID: \"80aae4f7-4253-40ec-918f-6b67b6439e4d\") " Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.739084 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-utilities" (OuterVolumeSpecName: "utilities") pod "80aae4f7-4253-40ec-918f-6b67b6439e4d" (UID: "80aae4f7-4253-40ec-918f-6b67b6439e4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.744847 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80aae4f7-4253-40ec-918f-6b67b6439e4d-kube-api-access-pjv2j" (OuterVolumeSpecName: "kube-api-access-pjv2j") pod "80aae4f7-4253-40ec-918f-6b67b6439e4d" (UID: "80aae4f7-4253-40ec-918f-6b67b6439e4d"). InnerVolumeSpecName "kube-api-access-pjv2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.795720 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80aae4f7-4253-40ec-918f-6b67b6439e4d" (UID: "80aae4f7-4253-40ec-918f-6b67b6439e4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.840519 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.840551 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjv2j\" (UniqueName: \"kubernetes.io/projected/80aae4f7-4253-40ec-918f-6b67b6439e4d-kube-api-access-pjv2j\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:07 crc kubenswrapper[4797]: I0216 12:00:07.840562 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80aae4f7-4253-40ec-918f-6b67b6439e4d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.102742 4797 generic.go:334] "Generic (PLEG): container finished" podID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerID="18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d" exitCode=0 Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.102788 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerDied","Data":"18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d"} Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.102822 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgbm5" event={"ID":"80aae4f7-4253-40ec-918f-6b67b6439e4d","Type":"ContainerDied","Data":"bd1a646ad75649b4fadd94b0985f8f71e0540ea12a12b641ca1e31aa23bc339e"} Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.102841 4797 scope.go:117] "RemoveContainer" containerID="18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.102981 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pgbm5" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.137460 4797 scope.go:117] "RemoveContainer" containerID="e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.150474 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgbm5"] Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.166163 4797 scope.go:117] "RemoveContainer" containerID="a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.171096 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pgbm5"] Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.206340 4797 scope.go:117] "RemoveContainer" containerID="18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d" Feb 16 12:00:08 crc kubenswrapper[4797]: E0216 12:00:08.206787 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d\": container with ID starting with 18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d not found: ID does not exist" containerID="18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.206847 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d"} err="failed to get container status \"18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d\": rpc error: code = NotFound desc = could not find container \"18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d\": container with ID starting with 18f71fec9c4ae340d5ae6e3f59ed539ef0344df5d6a4433873e15b983e3e936d not found: ID does not exist" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.206883 4797 scope.go:117] "RemoveContainer" containerID="e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb" Feb 16 12:00:08 crc kubenswrapper[4797]: E0216 12:00:08.207182 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb\": container with ID starting with e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb not found: ID does not exist" containerID="e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.207216 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb"} err="failed to get container status \"e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb\": rpc error: code = NotFound desc = could not find container \"e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb\": container with ID starting with e8934134ac3ea83bdf3883d4b6ae7fa28f270d7780b526004e1e85e6615e31cb not found: ID does not exist" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.207237 4797 scope.go:117] "RemoveContainer" containerID="a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef" Feb 16 12:00:08 crc kubenswrapper[4797]: E0216 12:00:08.207850 4797 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef\": container with ID starting with a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef not found: ID does not exist" containerID="a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef" Feb 16 12:00:08 crc kubenswrapper[4797]: I0216 12:00:08.207880 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef"} err="failed to get container status \"a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef\": rpc error: code = NotFound desc = could not find container \"a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef\": container with ID starting with a45abd22da647533be3acc04031c750dc7693678dc2a0c0e4c4a7141de8e5bef not found: ID does not exist" Feb 16 12:00:10 crc kubenswrapper[4797]: I0216 12:00:10.010815 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" path="/var/lib/kubelet/pods/80aae4f7-4253-40ec-918f-6b67b6439e4d/volumes" Feb 16 12:00:10 crc kubenswrapper[4797]: I0216 12:00:10.983218 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:00:10 crc kubenswrapper[4797]: E0216 12:00:10.983449 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:00:12 crc kubenswrapper[4797]: E0216 12:00:12.986251 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:00:24 crc kubenswrapper[4797]: I0216 12:00:24.982860 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:00:24 crc kubenswrapper[4797]: E0216 12:00:24.983869 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:00:26 crc kubenswrapper[4797]: E0216 12:00:26.000412 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:00:36 crc kubenswrapper[4797]: I0216 12:00:36.982506 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:00:36 crc 
kubenswrapper[4797]: E0216 12:00:36.983323 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.125205 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s5m8f"] Feb 16 12:00:38 crc kubenswrapper[4797]: E0216 12:00:38.126096 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="registry-server" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.126126 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="registry-server" Feb 16 12:00:38 crc kubenswrapper[4797]: E0216 12:00:38.126180 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="extract-utilities" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.126197 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="extract-utilities" Feb 16 12:00:38 crc kubenswrapper[4797]: E0216 12:00:38.126223 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="extract-content" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.126238 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="extract-content" Feb 16 12:00:38 crc kubenswrapper[4797]: E0216 12:00:38.126261 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3dde03-7d95-40f7-94d3-d94bafa6e551" containerName="collect-profiles" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.126272 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3dde03-7d95-40f7-94d3-d94bafa6e551" containerName="collect-profiles" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.126642 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3dde03-7d95-40f7-94d3-d94bafa6e551" containerName="collect-profiles" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.126691 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="80aae4f7-4253-40ec-918f-6b67b6439e4d" containerName="registry-server" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.129219 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.154881 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5m8f"] Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.168474 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmnpj\" (UniqueName: \"kubernetes.io/projected/ef62f656-527c-4837-80df-375b56517c8e-kube-api-access-kmnpj\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.168682 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-utilities\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.168876 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-catalog-content\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.270362 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-catalog-content\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.270779 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmnpj\" (UniqueName: \"kubernetes.io/projected/ef62f656-527c-4837-80df-375b56517c8e-kube-api-access-kmnpj\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.270856 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-utilities\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.270997 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-catalog-content\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.271280 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-utilities\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.297479 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kmnpj\" (UniqueName: \"kubernetes.io/projected/ef62f656-527c-4837-80df-375b56517c8e-kube-api-access-kmnpj\") pod \"redhat-operators-s5m8f\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.319540 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wsjkl"] Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.321498 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.334116 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsjkl"] Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.373169 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-utilities\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.373223 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsjjn\" (UniqueName: \"kubernetes.io/projected/412f0c19-7637-4a33-89a0-8ec3222bcf77-kube-api-access-qsjjn\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.373427 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-catalog-content\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.457515 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.475622 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-catalog-content\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.475721 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-utilities\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.475740 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsjjn\" (UniqueName: \"kubernetes.io/projected/412f0c19-7637-4a33-89a0-8ec3222bcf77-kube-api-access-qsjjn\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.476811 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-catalog-content\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.476964 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-utilities\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.494402 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsjjn\" (UniqueName: \"kubernetes.io/projected/412f0c19-7637-4a33-89a0-8ec3222bcf77-kube-api-access-qsjjn\") pod \"certified-operators-wsjkl\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.707317 4797 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.837064 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s5m8f"] Feb 16 12:00:38 crc kubenswrapper[4797]: I0216 12:00:38.901415 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5m8f" event={"ID":"ef62f656-527c-4837-80df-375b56517c8e","Type":"ContainerStarted","Data":"332e85745e8f581fa94eb9324bece5d79a2f5974100bcf105f20430c506468ad"} Feb 16 12:00:39 crc kubenswrapper[4797]: I0216 12:00:39.252186 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wsjkl"] Feb 16 12:00:39 crc kubenswrapper[4797]: W0216 12:00:39.272503 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod412f0c19_7637_4a33_89a0_8ec3222bcf77.slice/crio-35a60d672f0deb5536d7a4d1693785683d3cd11ad64b2d56d6786da182bb6f75 WatchSource:0}: Error finding container 35a60d672f0deb5536d7a4d1693785683d3cd11ad64b2d56d6786da182bb6f75: Status 404 returned error can't find the container with id 35a60d672f0deb5536d7a4d1693785683d3cd11ad64b2d56d6786da182bb6f75 Feb 16 12:00:39 crc kubenswrapper[4797]: I0216 12:00:39.918903 4797 generic.go:334] "Generic (PLEG): container finished" podID="ef62f656-527c-4837-80df-375b56517c8e" containerID="9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b" exitCode=0 Feb 16 12:00:39 crc kubenswrapper[4797]: I0216 12:00:39.919001 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5m8f" event={"ID":"ef62f656-527c-4837-80df-375b56517c8e","Type":"ContainerDied","Data":"9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b"} Feb 16 12:00:39 crc kubenswrapper[4797]: I0216 12:00:39.923478 4797 generic.go:334] "Generic (PLEG): container finished" podID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerID="a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde" exitCode=0 Feb 16 12:00:39 crc kubenswrapper[4797]: I0216 12:00:39.923513 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsjkl" event={"ID":"412f0c19-7637-4a33-89a0-8ec3222bcf77","Type":"ContainerDied","Data":"a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde"} Feb 16 12:00:39 crc kubenswrapper[4797]: I0216 12:00:39.923532 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsjkl" event={"ID":"412f0c19-7637-4a33-89a0-8ec3222bcf77","Type":"ContainerStarted","Data":"35a60d672f0deb5536d7a4d1693785683d3cd11ad64b2d56d6786da182bb6f75"} Feb 16 12:00:39 crc kubenswrapper[4797]: E0216 12:00:39.986456 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:00:40 crc kubenswrapper[4797]: I0216 12:00:40.939871 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsjkl" event={"ID":"412f0c19-7637-4a33-89a0-8ec3222bcf77","Type":"ContainerStarted","Data":"50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f"} Feb 16 12:00:41 crc kubenswrapper[4797]: I0216 12:00:41.958054 4797 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5m8f" event={"ID":"ef62f656-527c-4837-80df-375b56517c8e","Type":"ContainerStarted","Data":"d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed"} Feb 16 12:00:42 crc kubenswrapper[4797]: I0216 12:00:42.970284 4797 generic.go:334] "Generic (PLEG): container finished" podID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerID="50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f" exitCode=0 Feb 16 12:00:42 crc kubenswrapper[4797]: I0216 12:00:42.970903 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsjkl" event={"ID":"412f0c19-7637-4a33-89a0-8ec3222bcf77","Type":"ContainerDied","Data":"50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f"} Feb 16 12:00:44 crc kubenswrapper[4797]: I0216 12:00:44.021922 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsjkl" event={"ID":"412f0c19-7637-4a33-89a0-8ec3222bcf77","Type":"ContainerStarted","Data":"ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf"} Feb 16 12:00:44 crc kubenswrapper[4797]: I0216 12:00:44.048409 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wsjkl" podStartSLOduration=2.395317878 podStartE2EDuration="6.048390622s" podCreationTimestamp="2026-02-16 12:00:38 +0000 UTC" firstStartedPulling="2026-02-16 12:00:39.924498736 +0000 UTC m=+3234.644683716" lastFinishedPulling="2026-02-16 12:00:43.57757144 +0000 UTC m=+3238.297756460" observedRunningTime="2026-02-16 12:00:44.046415129 +0000 UTC m=+3238.766600149" watchObservedRunningTime="2026-02-16 12:00:44.048390622 +0000 UTC m=+3238.768575602" Feb 16 12:00:47 crc kubenswrapper[4797]: I0216 12:00:47.022946 4797 generic.go:334] "Generic (PLEG): container finished" podID="f01e0079-0175-4188-a990-79451d57b8d0" containerID="55ffd266fd5679be314ec1e754883cc68fd5eef87342c0d50fcf95af589d0369" exitCode=0 Feb 16 12:00:47 crc kubenswrapper[4797]: I0216 12:00:47.023009 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-mf9vn/must-gather-mshdc" event={"ID":"f01e0079-0175-4188-a990-79451d57b8d0","Type":"ContainerDied","Data":"55ffd266fd5679be314ec1e754883cc68fd5eef87342c0d50fcf95af589d0369"} Feb 16 12:00:47 crc kubenswrapper[4797]: I0216 12:00:47.024732 4797 scope.go:117] "RemoveContainer" containerID="55ffd266fd5679be314ec1e754883cc68fd5eef87342c0d50fcf95af589d0369" Feb 16 12:00:47 crc kubenswrapper[4797]: I0216 12:00:47.030019 4797 generic.go:334] "Generic (PLEG): container finished" podID="ef62f656-527c-4837-80df-375b56517c8e" containerID="d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed" exitCode=0 Feb 16 12:00:47 crc kubenswrapper[4797]: I0216 12:00:47.030077 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5m8f" event={"ID":"ef62f656-527c-4837-80df-375b56517c8e","Type":"ContainerDied","Data":"d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed"} Feb 16 12:00:47 crc kubenswrapper[4797]: I0216 12:00:47.425107 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mf9vn_must-gather-mshdc_f01e0079-0175-4188-a990-79451d57b8d0/gather/0.log" Feb 16 12:00:48 crc kubenswrapper[4797]: I0216 12:00:48.042607 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5m8f" 
event={"ID":"ef62f656-527c-4837-80df-375b56517c8e","Type":"ContainerStarted","Data":"ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9"} Feb 16 12:00:48 crc kubenswrapper[4797]: I0216 12:00:48.075768 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s5m8f" podStartSLOduration=2.513460356 podStartE2EDuration="10.075744118s" podCreationTimestamp="2026-02-16 12:00:38 +0000 UTC" firstStartedPulling="2026-02-16 12:00:39.922838161 +0000 UTC m=+3234.643023141" lastFinishedPulling="2026-02-16 12:00:47.485121923 +0000 UTC m=+3242.205306903" observedRunningTime="2026-02-16 12:00:48.068128282 +0000 UTC m=+3242.788313262" watchObservedRunningTime="2026-02-16 12:00:48.075744118 +0000 UTC m=+3242.795929098" Feb 16 12:00:48 crc kubenswrapper[4797]: I0216 12:00:48.458337 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:48 crc kubenswrapper[4797]: I0216 12:00:48.458463 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:48 crc kubenswrapper[4797]: I0216 12:00:48.707679 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:48 crc kubenswrapper[4797]: I0216 12:00:48.709037 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:49 crc kubenswrapper[4797]: I0216 12:00:49.512558 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s5m8f" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="registry-server" probeResult="failure" output=< Feb 16 12:00:49 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s Feb 16 12:00:49 crc kubenswrapper[4797]: > Feb 16 12:00:49 crc kubenswrapper[4797]: I0216 12:00:49.759508 4797 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wsjkl" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="registry-server" probeResult="failure" output=< Feb 16 12:00:49 crc kubenswrapper[4797]: timeout: failed to connect service ":50051" within 1s Feb 16 12:00:49 crc kubenswrapper[4797]: > Feb 16 12:00:49 crc kubenswrapper[4797]: I0216 12:00:49.982629 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:00:49 crc kubenswrapper[4797]: E0216 12:00:49.982893 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:00:52 crc kubenswrapper[4797]: E0216 12:00:52.985528 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:00:55 crc kubenswrapper[4797]: I0216 12:00:55.802641 4797 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-must-gather-mf9vn/must-gather-mshdc"] Feb 16 12:00:55 crc kubenswrapper[4797]: I0216 12:00:55.803392 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-mf9vn/must-gather-mshdc" podUID="f01e0079-0175-4188-a990-79451d57b8d0" containerName="copy" containerID="cri-o://fd1ac09bdabd3dab27fa2a51913dc32279adaeb3010191059475e89ccfee4030" gracePeriod=2 Feb 16 12:00:55 crc kubenswrapper[4797]: I0216 12:00:55.816840 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-mf9vn/must-gather-mshdc"] Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.151872 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mf9vn_must-gather-mshdc_f01e0079-0175-4188-a990-79451d57b8d0/copy/0.log" Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.153909 4797 generic.go:334] "Generic (PLEG): container finished" podID="f01e0079-0175-4188-a990-79451d57b8d0" containerID="fd1ac09bdabd3dab27fa2a51913dc32279adaeb3010191059475e89ccfee4030" exitCode=143 Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.381979 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mf9vn_must-gather-mshdc_f01e0079-0175-4188-a990-79451d57b8d0/copy/0.log" Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.382830 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.503501 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01e0079-0175-4188-a990-79451d57b8d0-must-gather-output\") pod \"f01e0079-0175-4188-a990-79451d57b8d0\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.503851 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqfh\" (UniqueName: \"kubernetes.io/projected/f01e0079-0175-4188-a990-79451d57b8d0-kube-api-access-bbqfh\") pod \"f01e0079-0175-4188-a990-79451d57b8d0\" (UID: \"f01e0079-0175-4188-a990-79451d57b8d0\") " Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.509891 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f01e0079-0175-4188-a990-79451d57b8d0-kube-api-access-bbqfh" (OuterVolumeSpecName: "kube-api-access-bbqfh") pod "f01e0079-0175-4188-a990-79451d57b8d0" (UID: "f01e0079-0175-4188-a990-79451d57b8d0"). InnerVolumeSpecName "kube-api-access-bbqfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.607093 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqfh\" (UniqueName: \"kubernetes.io/projected/f01e0079-0175-4188-a990-79451d57b8d0-kube-api-access-bbqfh\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.634867 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f01e0079-0175-4188-a990-79451d57b8d0-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f01e0079-0175-4188-a990-79451d57b8d0" (UID: "f01e0079-0175-4188-a990-79451d57b8d0"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:00:56 crc kubenswrapper[4797]: I0216 12:00:56.711139 4797 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f01e0079-0175-4188-a990-79451d57b8d0-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 12:00:57 crc kubenswrapper[4797]: I0216 12:00:57.164518 4797 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-mf9vn_must-gather-mshdc_f01e0079-0175-4188-a990-79451d57b8d0/copy/0.log" Feb 16 12:00:57 crc kubenswrapper[4797]: I0216 12:00:57.165147 4797 scope.go:117] "RemoveContainer" containerID="fd1ac09bdabd3dab27fa2a51913dc32279adaeb3010191059475e89ccfee4030" Feb 16 12:00:57 crc kubenswrapper[4797]: I0216 12:00:57.165211 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-mf9vn/must-gather-mshdc" Feb 16 12:00:57 crc kubenswrapper[4797]: I0216 12:00:57.217681 4797 scope.go:117] "RemoveContainer" containerID="55ffd266fd5679be314ec1e754883cc68fd5eef87342c0d50fcf95af589d0369" Feb 16 12:00:58 crc kubenswrapper[4797]: I0216 12:00:58.005625 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f01e0079-0175-4188-a990-79451d57b8d0" path="/var/lib/kubelet/pods/f01e0079-0175-4188-a990-79451d57b8d0/volumes" Feb 16 12:00:58 crc kubenswrapper[4797]: I0216 12:00:58.507832 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:58 crc kubenswrapper[4797]: I0216 12:00:58.559778 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:00:58 crc kubenswrapper[4797]: I0216 12:00:58.745899 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5m8f"] Feb 16 12:00:58 crc kubenswrapper[4797]: I0216 12:00:58.752644 4797 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:00:58 crc kubenswrapper[4797]: I0216 12:00:58.797169 4797 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.148171 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29520721-rv72c"] Feb 16 12:01:00 crc kubenswrapper[4797]: E0216 12:01:00.148999 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01e0079-0175-4188-a990-79451d57b8d0" containerName="copy" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.149014 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01e0079-0175-4188-a990-79451d57b8d0" containerName="copy" Feb 16 12:01:00 crc kubenswrapper[4797]: E0216 12:01:00.149045 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f01e0079-0175-4188-a990-79451d57b8d0" containerName="gather" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.149051 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="f01e0079-0175-4188-a990-79451d57b8d0" containerName="gather" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.149252 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="f01e0079-0175-4188-a990-79451d57b8d0" containerName="gather" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.149280 4797 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f01e0079-0175-4188-a990-79451d57b8d0" containerName="copy" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.149975 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.168843 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520721-rv72c"] Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.185625 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-fernet-keys\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.185686 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-combined-ca-bundle\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.185827 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htcbd\" (UniqueName: \"kubernetes.io/projected/0c129993-5a9a-4dca-9185-5e1e33f77c7b-kube-api-access-htcbd\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.185862 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-config-data\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.227898 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s5m8f" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="registry-server" containerID="cri-o://ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9" gracePeriod=2 Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.288006 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-fernet-keys\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.288372 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-combined-ca-bundle\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.288466 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htcbd\" (UniqueName: \"kubernetes.io/projected/0c129993-5a9a-4dca-9185-5e1e33f77c7b-kube-api-access-htcbd\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " 
pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.288492 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-config-data\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.295541 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-combined-ca-bundle\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.296303 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-fernet-keys\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.301679 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-config-data\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.309474 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htcbd\" (UniqueName: \"kubernetes.io/projected/0c129993-5a9a-4dca-9185-5e1e33f77c7b-kube-api-access-htcbd\") pod \"keystone-cron-29520721-rv72c\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.481080 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.847974 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:01:00 crc kubenswrapper[4797]: I0216 12:01:00.983308 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:01:00 crc kubenswrapper[4797]: E0216 12:01:00.984343 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.008932 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-catalog-content\") pod \"ef62f656-527c-4837-80df-375b56517c8e\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.009116 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-utilities\") pod \"ef62f656-527c-4837-80df-375b56517c8e\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.009137 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmnpj\" (UniqueName: \"kubernetes.io/projected/ef62f656-527c-4837-80df-375b56517c8e-kube-api-access-kmnpj\") pod \"ef62f656-527c-4837-80df-375b56517c8e\" (UID: \"ef62f656-527c-4837-80df-375b56517c8e\") " Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.010009 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-utilities" (OuterVolumeSpecName: "utilities") pod "ef62f656-527c-4837-80df-375b56517c8e" (UID: "ef62f656-527c-4837-80df-375b56517c8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.015209 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef62f656-527c-4837-80df-375b56517c8e-kube-api-access-kmnpj" (OuterVolumeSpecName: "kube-api-access-kmnpj") pod "ef62f656-527c-4837-80df-375b56517c8e" (UID: "ef62f656-527c-4837-80df-375b56517c8e"). InnerVolumeSpecName "kube-api-access-kmnpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.112043 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.112086 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmnpj\" (UniqueName: \"kubernetes.io/projected/ef62f656-527c-4837-80df-375b56517c8e-kube-api-access-kmnpj\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.119715 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29520721-rv72c"] Feb 16 12:01:01 crc kubenswrapper[4797]: W0216 12:01:01.127898 4797 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c129993_5a9a_4dca_9185_5e1e33f77c7b.slice/crio-b11bce9ded53ffa7acc3a53c12520acbcdfc03121986213ceb733a734bdca9dc WatchSource:0}: Error finding container b11bce9ded53ffa7acc3a53c12520acbcdfc03121986213ceb733a734bdca9dc: Status 404 returned error can't find the container with id b11bce9ded53ffa7acc3a53c12520acbcdfc03121986213ceb733a734bdca9dc Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.128401 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef62f656-527c-4837-80df-375b56517c8e" (UID: "ef62f656-527c-4837-80df-375b56517c8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.158604 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsjkl"] Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.158906 4797 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wsjkl" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="registry-server" containerID="cri-o://ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf" gracePeriod=2 Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.214418 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef62f656-527c-4837-80df-375b56517c8e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.245398 4797 generic.go:334] "Generic (PLEG): container finished" podID="ef62f656-527c-4837-80df-375b56517c8e" containerID="ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9" exitCode=0 Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.245513 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5m8f" event={"ID":"ef62f656-527c-4837-80df-375b56517c8e","Type":"ContainerDied","Data":"ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9"} Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.245568 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s5m8f" event={"ID":"ef62f656-527c-4837-80df-375b56517c8e","Type":"ContainerDied","Data":"332e85745e8f581fa94eb9324bece5d79a2f5974100bcf105f20430c506468ad"} Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.245702 4797 scope.go:117] "RemoveContainer" 
containerID="ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.245812 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s5m8f" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.248206 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520721-rv72c" event={"ID":"0c129993-5a9a-4dca-9185-5e1e33f77c7b","Type":"ContainerStarted","Data":"b11bce9ded53ffa7acc3a53c12520acbcdfc03121986213ceb733a734bdca9dc"} Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.306820 4797 scope.go:117] "RemoveContainer" containerID="d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.339228 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s5m8f"] Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.351983 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s5m8f"] Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.357912 4797 scope.go:117] "RemoveContainer" containerID="9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.409012 4797 scope.go:117] "RemoveContainer" containerID="ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9" Feb 16 12:01:01 crc kubenswrapper[4797]: E0216 12:01:01.416764 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9\": container with ID starting with ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9 not found: ID does not exist" containerID="ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.416820 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9"} err="failed to get container status \"ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9\": rpc error: code = NotFound desc = could not find container \"ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9\": container with ID starting with ed46dbd324ed695824873efc099785d3600b3142fb4706370ac30a68753628e9 not found: ID does not exist" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.416849 4797 scope.go:117] "RemoveContainer" containerID="d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed" Feb 16 12:01:01 crc kubenswrapper[4797]: E0216 12:01:01.420036 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed\": container with ID starting with d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed not found: ID does not exist" containerID="d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.420090 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed"} err="failed to get container status \"d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed\": rpc error: code = NotFound desc = could not find 
container \"d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed\": container with ID starting with d7201afcc9f09579b215b37247e3c78024d5917716bb08ebf27e6bc9b166f2ed not found: ID does not exist" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.420131 4797 scope.go:117] "RemoveContainer" containerID="9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b" Feb 16 12:01:01 crc kubenswrapper[4797]: E0216 12:01:01.422772 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b\": container with ID starting with 9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b not found: ID does not exist" containerID="9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.422811 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b"} err="failed to get container status \"9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b\": rpc error: code = NotFound desc = could not find container \"9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b\": container with ID starting with 9818f90c70d481d73b95593f1015b767cc17b87f57334447f45ac58761c74a0b not found: ID does not exist" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.795934 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.931853 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-catalog-content\") pod \"412f0c19-7637-4a33-89a0-8ec3222bcf77\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.932211 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-utilities\") pod \"412f0c19-7637-4a33-89a0-8ec3222bcf77\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.932358 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsjjn\" (UniqueName: \"kubernetes.io/projected/412f0c19-7637-4a33-89a0-8ec3222bcf77-kube-api-access-qsjjn\") pod \"412f0c19-7637-4a33-89a0-8ec3222bcf77\" (UID: \"412f0c19-7637-4a33-89a0-8ec3222bcf77\") " Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.932913 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-utilities" (OuterVolumeSpecName: "utilities") pod "412f0c19-7637-4a33-89a0-8ec3222bcf77" (UID: "412f0c19-7637-4a33-89a0-8ec3222bcf77"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.933818 4797 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.938241 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/412f0c19-7637-4a33-89a0-8ec3222bcf77-kube-api-access-qsjjn" (OuterVolumeSpecName: "kube-api-access-qsjjn") pod "412f0c19-7637-4a33-89a0-8ec3222bcf77" (UID: "412f0c19-7637-4a33-89a0-8ec3222bcf77"). InnerVolumeSpecName "kube-api-access-qsjjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.990054 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "412f0c19-7637-4a33-89a0-8ec3222bcf77" (UID: "412f0c19-7637-4a33-89a0-8ec3222bcf77"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 12:01:01 crc kubenswrapper[4797]: I0216 12:01:01.997717 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef62f656-527c-4837-80df-375b56517c8e" path="/var/lib/kubelet/pods/ef62f656-527c-4837-80df-375b56517c8e/volumes" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.035450 4797 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/412f0c19-7637-4a33-89a0-8ec3222bcf77-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.035478 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsjjn\" (UniqueName: \"kubernetes.io/projected/412f0c19-7637-4a33-89a0-8ec3222bcf77-kube-api-access-qsjjn\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.261331 4797 generic.go:334] "Generic (PLEG): container finished" podID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerID="ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf" exitCode=0 Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.261396 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsjkl" event={"ID":"412f0c19-7637-4a33-89a0-8ec3222bcf77","Type":"ContainerDied","Data":"ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf"} Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.261419 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wsjkl" event={"ID":"412f0c19-7637-4a33-89a0-8ec3222bcf77","Type":"ContainerDied","Data":"35a60d672f0deb5536d7a4d1693785683d3cd11ad64b2d56d6786da182bb6f75"} Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.261439 4797 scope.go:117] "RemoveContainer" containerID="ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.261547 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wsjkl" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.269360 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520721-rv72c" event={"ID":"0c129993-5a9a-4dca-9185-5e1e33f77c7b","Type":"ContainerStarted","Data":"dff4c126c872b88ad3df3e22a6d34cb9896bedb6035e18699eb294b7d890f063"} Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.288378 4797 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wsjkl"] Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.301960 4797 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wsjkl"] Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.305954 4797 scope.go:117] "RemoveContainer" containerID="50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.308807 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29520721-rv72c" podStartSLOduration=2.308771735 podStartE2EDuration="2.308771735s" podCreationTimestamp="2026-02-16 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 12:01:02.306214537 +0000 UTC m=+3257.026399547" watchObservedRunningTime="2026-02-16 12:01:02.308771735 +0000 UTC m=+3257.028956705" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.336046 4797 scope.go:117] "RemoveContainer" containerID="a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.394726 4797 scope.go:117] "RemoveContainer" containerID="ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf" Feb 16 12:01:02 crc kubenswrapper[4797]: E0216 12:01:02.395143 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf\": container with ID starting with ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf not found: ID does not exist" containerID="ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.395175 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf"} err="failed to get container status \"ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf\": rpc error: code = NotFound desc = could not find container \"ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf\": container with ID starting with ae95b230245939d645f45cbc29f2e444ddd9ad180c180797f236486df34d01bf not found: ID does not exist" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.395197 4797 scope.go:117] "RemoveContainer" containerID="50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f" Feb 16 12:01:02 crc kubenswrapper[4797]: E0216 12:01:02.395730 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f\": container with ID starting with 50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f not found: ID does not exist" containerID="50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f" Feb 
16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.395776 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f"} err="failed to get container status \"50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f\": rpc error: code = NotFound desc = could not find container \"50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f\": container with ID starting with 50a359884bd0ea8717cf1d5431ce33f3efa45daf9da96fa2152639c09a81000f not found: ID does not exist" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.395809 4797 scope.go:117] "RemoveContainer" containerID="a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde" Feb 16 12:01:02 crc kubenswrapper[4797]: E0216 12:01:02.396393 4797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde\": container with ID starting with a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde not found: ID does not exist" containerID="a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde" Feb 16 12:01:02 crc kubenswrapper[4797]: I0216 12:01:02.396447 4797 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde"} err="failed to get container status \"a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde\": rpc error: code = NotFound desc = could not find container \"a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde\": container with ID starting with a78683fbb54fb9f6a00ae94d78871f202f146a3d80a155248fac0f271aa34bde not found: ID does not exist" Feb 16 12:01:03 crc kubenswrapper[4797]: I0216 12:01:03.994713 4797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" path="/var/lib/kubelet/pods/412f0c19-7637-4a33-89a0-8ec3222bcf77/volumes" Feb 16 12:01:04 crc kubenswrapper[4797]: I0216 12:01:04.310840 4797 generic.go:334] "Generic (PLEG): container finished" podID="0c129993-5a9a-4dca-9185-5e1e33f77c7b" containerID="dff4c126c872b88ad3df3e22a6d34cb9896bedb6035e18699eb294b7d890f063" exitCode=0 Feb 16 12:01:04 crc kubenswrapper[4797]: I0216 12:01:04.310887 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520721-rv72c" event={"ID":"0c129993-5a9a-4dca-9185-5e1e33f77c7b","Type":"ContainerDied","Data":"dff4c126c872b88ad3df3e22a6d34cb9896bedb6035e18699eb294b7d890f063"} Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.684430 4797 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.814295 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htcbd\" (UniqueName: \"kubernetes.io/projected/0c129993-5a9a-4dca-9185-5e1e33f77c7b-kube-api-access-htcbd\") pod \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.814336 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-combined-ca-bundle\") pod \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.814494 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-config-data\") pod \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.814535 4797 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-fernet-keys\") pod \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\" (UID: \"0c129993-5a9a-4dca-9185-5e1e33f77c7b\") " Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.821599 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0c129993-5a9a-4dca-9185-5e1e33f77c7b" (UID: "0c129993-5a9a-4dca-9185-5e1e33f77c7b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.821752 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c129993-5a9a-4dca-9185-5e1e33f77c7b-kube-api-access-htcbd" (OuterVolumeSpecName: "kube-api-access-htcbd") pod "0c129993-5a9a-4dca-9185-5e1e33f77c7b" (UID: "0c129993-5a9a-4dca-9185-5e1e33f77c7b"). InnerVolumeSpecName "kube-api-access-htcbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.841178 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c129993-5a9a-4dca-9185-5e1e33f77c7b" (UID: "0c129993-5a9a-4dca-9185-5e1e33f77c7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.877057 4797 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-config-data" (OuterVolumeSpecName: "config-data") pod "0c129993-5a9a-4dca-9185-5e1e33f77c7b" (UID: "0c129993-5a9a-4dca-9185-5e1e33f77c7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.919123 4797 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htcbd\" (UniqueName: \"kubernetes.io/projected/0c129993-5a9a-4dca-9185-5e1e33f77c7b-kube-api-access-htcbd\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.919158 4797 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.919172 4797 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:05 crc kubenswrapper[4797]: I0216 12:01:05.919183 4797 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c129993-5a9a-4dca-9185-5e1e33f77c7b-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 12:01:06 crc kubenswrapper[4797]: I0216 12:01:06.329982 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29520721-rv72c" event={"ID":"0c129993-5a9a-4dca-9185-5e1e33f77c7b","Type":"ContainerDied","Data":"b11bce9ded53ffa7acc3a53c12520acbcdfc03121986213ceb733a734bdca9dc"} Feb 16 12:01:06 crc kubenswrapper[4797]: I0216 12:01:06.330036 4797 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b11bce9ded53ffa7acc3a53c12520acbcdfc03121986213ceb733a734bdca9dc" Feb 16 12:01:06 crc kubenswrapper[4797]: I0216 12:01:06.330088 4797 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29520721-rv72c" Feb 16 12:01:06 crc kubenswrapper[4797]: I0216 12:01:06.450759 4797 scope.go:117] "RemoveContainer" containerID="468096b6a7407820115bca316d2ab6ff55a1d0ab0f993fd374ec4893912a18db" Feb 16 12:01:06 crc kubenswrapper[4797]: E0216 12:01:06.986395 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:01:13 crc kubenswrapper[4797]: I0216 12:01:13.070900 4797 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-c2tk9" podUID="be2f5af9-52ca-4678-80c6-ad099ddbf8ff" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 12:01:13 crc kubenswrapper[4797]: I0216 12:01:13.075634 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-kjc2z" podUID="261bff34-cd36-4214-880f-231fa0f1679b" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 12:01:15 crc kubenswrapper[4797]: I0216 12:01:15.988379 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:01:15 crc kubenswrapper[4797]: E0216 12:01:15.989056 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:01:19 crc kubenswrapper[4797]: E0216 12:01:19.986123 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:01:29 crc kubenswrapper[4797]: I0216 12:01:29.983166 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:01:29 crc kubenswrapper[4797]: E0216 12:01:29.983952 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:01:32 crc kubenswrapper[4797]: E0216 12:01:32.985415 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:01:40 crc kubenswrapper[4797]: I0216 12:01:40.982715 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:01:40 crc kubenswrapper[4797]: E0216 12:01:40.983637 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:01:43 crc kubenswrapper[4797]: E0216 12:01:43.985476 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:01:52 crc kubenswrapper[4797]: I0216 12:01:52.984635 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:01:52 crc kubenswrapper[4797]: E0216 12:01:52.985849 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:01:54 crc 
kubenswrapper[4797]: I0216 12:01:54.984772 4797 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 12:01:55 crc kubenswrapper[4797]: E0216 12:01:55.115822 4797 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 12:01:55 crc kubenswrapper[4797]: E0216 12:01:55.116163 4797 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 12:01:55 crc kubenswrapper[4797]: E0216 12:01:55.116298 4797 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4fvxd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]V
olumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-dhgrw_openstack(895bed8d-c376-47ad-8fa6-3cf0f07399c0): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 12:01:55 crc kubenswrapper[4797]: E0216 12:01:55.117532 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:02:06 crc kubenswrapper[4797]: I0216 12:02:06.983871 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:02:06 crc kubenswrapper[4797]: E0216 12:02:06.984924 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:02:09 crc kubenswrapper[4797]: E0216 12:02:09.985814 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:02:17 crc kubenswrapper[4797]: I0216 12:02:17.984623 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:02:17 crc kubenswrapper[4797]: E0216 12:02:17.986103 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:02:21 crc kubenswrapper[4797]: E0216 12:02:21.986093 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:02:30 crc kubenswrapper[4797]: I0216 12:02:30.983152 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:02:30 crc kubenswrapper[4797]: E0216 12:02:30.984254 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:02:32 crc kubenswrapper[4797]: E0216 12:02:32.985046 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:02:41 crc kubenswrapper[4797]: I0216 12:02:41.982777 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:02:41 crc kubenswrapper[4797]: E0216 12:02:41.983493 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:02:45 crc kubenswrapper[4797]: E0216 12:02:45.996792 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:02:53 crc kubenswrapper[4797]: I0216 12:02:53.983034 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:02:53 crc kubenswrapper[4797]: E0216 12:02:53.983969 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:02:57 crc kubenswrapper[4797]: E0216 12:02:57.989845 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:03:04 crc kubenswrapper[4797]: I0216 12:03:04.983050 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:03:04 crc kubenswrapper[4797]: E0216 12:03:04.983825 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-lkgrl_openshift-machine-config-operator(128f4e85-fd17-4281-97d2-872fda792b21)\"" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" 
podUID="128f4e85-fd17-4281-97d2-872fda792b21" Feb 16 12:03:08 crc kubenswrapper[4797]: E0216 12:03:08.986903 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:03:18 crc kubenswrapper[4797]: I0216 12:03:18.983857 4797 scope.go:117] "RemoveContainer" containerID="3cdddd3cbae48a92c9c3ea45964ffbbee4fd749c2b7d7338bb623a03a2b44daa" Feb 16 12:03:19 crc kubenswrapper[4797]: I0216 12:03:19.648179 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" event={"ID":"128f4e85-fd17-4281-97d2-872fda792b21","Type":"ContainerStarted","Data":"dfc80b529a85646f818bc279911434a755816f27cdc0fc1dbf673ecfb649deb6"} Feb 16 12:03:21 crc kubenswrapper[4797]: E0216 12:03:21.984991 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:03:35 crc kubenswrapper[4797]: E0216 12:03:35.991902 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:03:47 crc kubenswrapper[4797]: E0216 12:03:47.985706 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:04:00 crc kubenswrapper[4797]: E0216 12:04:00.984646 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:04:14 crc kubenswrapper[4797]: E0216 12:04:14.985923 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:04:29 crc kubenswrapper[4797]: E0216 12:04:29.985073 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:04:43 crc kubenswrapper[4797]: E0216 12:04:43.988226 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:04:54 crc kubenswrapper[4797]: E0216 12:04:54.987281 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:05:05 crc kubenswrapper[4797]: E0216 12:05:05.992179 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:05:19 crc kubenswrapper[4797]: E0216 12:05:19.984723 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:05:30 crc kubenswrapper[4797]: E0216 12:05:30.985308 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:05:41 crc kubenswrapper[4797]: I0216 12:05:41.703720 4797 patch_prober.go:28] interesting pod/machine-config-daemon-lkgrl container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 12:05:41 crc kubenswrapper[4797]: I0216 12:05:41.704325 4797 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-lkgrl" podUID="128f4e85-fd17-4281-97d2-872fda792b21" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 12:05:41 crc kubenswrapper[4797]: E0216 12:05:41.987000 4797 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-dhgrw" podUID="895bed8d-c376-47ad-8fa6-3cf0f07399c0" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.105358 4797 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l2xs6"] Feb 16 12:05:42 crc kubenswrapper[4797]: E0216 12:05:42.106471 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c129993-5a9a-4dca-9185-5e1e33f77c7b" containerName="keystone-cron" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.106502 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c129993-5a9a-4dca-9185-5e1e33f77c7b" 
containerName="keystone-cron" Feb 16 12:05:42 crc kubenswrapper[4797]: E0216 12:05:42.106535 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="registry-server" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.106552 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="registry-server" Feb 16 12:05:42 crc kubenswrapper[4797]: E0216 12:05:42.106622 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="extract-content" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.106643 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="extract-content" Feb 16 12:05:42 crc kubenswrapper[4797]: E0216 12:05:42.106673 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="extract-utilities" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.106686 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="extract-utilities" Feb 16 12:05:42 crc kubenswrapper[4797]: E0216 12:05:42.106706 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="extract-content" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.106718 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="extract-content" Feb 16 12:05:42 crc kubenswrapper[4797]: E0216 12:05:42.106767 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="extract-utilities" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.106779 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="extract-utilities" Feb 16 12:05:42 crc kubenswrapper[4797]: E0216 12:05:42.106797 4797 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="registry-server" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.106809 4797 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="registry-server" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.107203 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef62f656-527c-4837-80df-375b56517c8e" containerName="registry-server" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.107247 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="412f0c19-7637-4a33-89a0-8ec3222bcf77" containerName="registry-server" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.107318 4797 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c129993-5a9a-4dca-9185-5e1e33f77c7b" containerName="keystone-cron" Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.111039 4797 util.go:30] "No sandbox for pod can be found. 
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.111039 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.129445 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsf4\" (UniqueName: \"kubernetes.io/projected/00b6ce26-fce3-4ce6-a089-713823451094-kube-api-access-9tsf4\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.129542 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00b6ce26-fce3-4ce6-a089-713823451094-catalog-content\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.129982 4797 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00b6ce26-fce3-4ce6-a089-713823451094-utilities\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.132320 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2xs6"]
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.231721 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00b6ce26-fce3-4ce6-a089-713823451094-utilities\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.231801 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsf4\" (UniqueName: \"kubernetes.io/projected/00b6ce26-fce3-4ce6-a089-713823451094-kube-api-access-9tsf4\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.231825 4797 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00b6ce26-fce3-4ce6-a089-713823451094-catalog-content\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.232516 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00b6ce26-fce3-4ce6-a089-713823451094-catalog-content\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.232553 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00b6ce26-fce3-4ce6-a089-713823451094-utilities\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
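Volume setup for the new pod follows the reconciler's usual two-phase pattern: VerifyControllerAttachedVolume for each declared volume, then MountVolume.SetUp. The two empty-dir volumes succeed immediately above, and the projected service-account token volume completes just below. A sketch for listing the same volumes from the API, assuming oc access (illustrative, not from the log):

  # The three volumes the reconciler mounts for the pod
  oc -n openshift-marketplace get pod redhat-marketplace-l2xs6 -o jsonpath='{range .spec.volumes[*]}{.name}{"\n"}{end}'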
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.254194 4797 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsf4\" (UniqueName: \"kubernetes.io/projected/00b6ce26-fce3-4ce6-a089-713823451094-kube-api-access-9tsf4\") pod \"redhat-marketplace-l2xs6\" (UID: \"00b6ce26-fce3-4ce6-a089-713823451094\") " pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.444455 4797 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l2xs6"
Feb 16 12:05:42 crc kubenswrapper[4797]: I0216 12:05:42.937910 4797 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l2xs6"]
Feb 16 12:05:43 crc kubenswrapper[4797]: I0216 12:05:43.207419 4797 generic.go:334] "Generic (PLEG): container finished" podID="00b6ce26-fce3-4ce6-a089-713823451094" containerID="7306dd4dbaabb0b64890b2f5611fcc36f8d9d297210dd3c0506c58d0819e520a" exitCode=0
Feb 16 12:05:43 crc kubenswrapper[4797]: I0216 12:05:43.207912 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2xs6" event={"ID":"00b6ce26-fce3-4ce6-a089-713823451094","Type":"ContainerDied","Data":"7306dd4dbaabb0b64890b2f5611fcc36f8d9d297210dd3c0506c58d0819e520a"}
Feb 16 12:05:43 crc kubenswrapper[4797]: I0216 12:05:43.207988 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2xs6" event={"ID":"00b6ce26-fce3-4ce6-a089-713823451094","Type":"ContainerStarted","Data":"6335bf9a636b011cbe9ce1ba7ccd466c539670d0e0d8cfaa493a3c71dcbd1fbc"}
Feb 16 12:05:45 crc kubenswrapper[4797]: I0216 12:05:45.227512 4797 generic.go:334] "Generic (PLEG): container finished" podID="00b6ce26-fce3-4ce6-a089-713823451094" containerID="6c230e982042ad8e98ea1f3184b7a66517110be596679bcba03e9a88eb6ec793" exitCode=0
Feb 16 12:05:45 crc kubenswrapper[4797]: I0216 12:05:45.227629 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2xs6" event={"ID":"00b6ce26-fce3-4ce6-a089-713823451094","Type":"ContainerDied","Data":"6c230e982042ad8e98ea1f3184b7a66517110be596679bcba03e9a88eb6ec793"}
Feb 16 12:05:46 crc kubenswrapper[4797]: I0216 12:05:46.239753 4797 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l2xs6" event={"ID":"00b6ce26-fce3-4ce6-a089-713823451094","Type":"ContainerStarted","Data":"de605abc12894363841de621cf0b9837f4fd3acc13c2ebf478610c0965cd34eb"}
Feb 16 12:05:46 crc kubenswrapper[4797]: I0216 12:05:46.276427 4797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l2xs6" podStartSLOduration=1.765904936 podStartE2EDuration="4.276398397s" podCreationTimestamp="2026-02-16 12:05:42 +0000 UTC" firstStartedPulling="2026-02-16 12:05:43.212495235 +0000 UTC m=+3537.932680215" lastFinishedPulling="2026-02-16 12:05:45.722988656 +0000 UTC m=+3540.443173676" observedRunningTime="2026-02-16 12:05:46.262236605 +0000 UTC m=+3540.982421585" watchObservedRunningTime="2026-02-16 12:05:46.276398397 +0000 UTC m=+3540.996583417"
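The closing pod_startup_latency_tracker entry summarizes the catalog pod's startup: podStartE2EDuration="4.276398397s" runs from podCreationTimestamp to watchObservedRunningTime, of which roughly 2.51s was image pulling (firstStartedPulling to lastFinishedPulling), and podStartSLOduration=1.765904936 is the same interval with the pull time excluded. A sketch for extracting these summaries from the node's journal, assuming the kubelet runs as a systemd unit named kubelet (illustrative, not from the log):

  # Collect the kubelet's startup-latency summaries
  journalctl -u kubelet | grep 'Observed pod startup duration'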